

Title: Reflection Characteristics of an Extended Hemispherical Lens for THz Time Domain Spectroscopy
We analyze the Gaussian character of multiple reflections in extended hemispherical lenses, which are widely used in terahertz spectroscopy. In particular, the first-, second-, and third-order reflections from a high-refractive-index extended hemispherical lens illuminated by a plane wave are characterized using a high-frequency approximation. To demonstrate the impact of the Gaussicity of the incident and reflected beams on coupled power levels, we study a quasi-optical link involving a horn antenna and an off-axis parabolic reflector. Although such multiple reflections with distinctly different Gaussian character can be time-gated in time-domain spectroscopy systems, care must be exercised in continuous-wave systems. Depending on the quasi-optical link, i.e., the positioning of the antennas and reflectors, second- and third-order reflections may induce significant variations in the sensed signal.
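The coupled power levels discussed above are commonly summarized by the fundamental-mode overlap between the incident and receiving beams. As a minimal illustration (the standard textbook coupling formula for two co-aligned Gaussian beams with waists in a common plane, not the paper's own analysis), the Gaussicity penalty can be sketched as:

```python
def gaussian_coupling_efficiency(w1: float, w2: float) -> float:
    """Power coupling between two co-aligned fundamental Gaussian beams
    whose waists w1 and w2 (same units) lie in a common plane:
    eta = (2*w1*w2 / (w1**2 + w2**2))**2.
    eta = 1 only when the waists are perfectly matched."""
    return (2.0 * w1 * w2 / (w1 ** 2 + w2 ** 2)) ** 2

# A 2:1 waist mismatch already costs about 36% of the power.
eta = gaussian_coupling_efficiency(1.0, 2.0)  # 0.64
```

This illustrates why reflections with a distinctly different Gaussian character than the nominal beam couple very differently into the same quasi-optical link.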
Award ID(s): 1710977
NSF-PAR ID: 10104339
Author(s) / Creator(s): ;
Journal Name: International Workshop on Antenna Theory
Volume: 1
Issue: 1
Page Range / eLocation ID: 1
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Multiple reflections from the electrically large hemispherical lens surfaces of lens-integrated antennas are investigated using an iterative Huygens' integral approach. For mmW- and THz-band applications in particular, double-slot antennas on extended hemispherical high-resistivity silicon lenses have been widely used due to the high Gaussicity of their radiation/reception patterns. Previous studies assumed an electrically large lens and evaluated the antenna pattern using a first-order physical-optics approximation. Although this approach is fairly accurate for estimating the radiation pattern of such antennas, the reception pattern and the associated performance of receiving sensors need more careful consideration due to the relatively large internal reflections from the concave boundary of the high-index lens. Here, we present an iterative method to compute and study the effects of multiple reflections inside electrically large lenses. The rich quasi-optical wave behavior is demonstrated through several examples corresponding to individual bounces of the incident, reflected, and transmitted waves from a double-slot antenna.
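The internal reflections mentioned above are strong precisely because silicon's refractive index is high (roughly n ≈ 3.42 in the THz band). As a quick sanity check, not part of the paper's iterative method, the normal-incidence Fresnel power reflectance at a silicon-air boundary can be computed directly:

```python
def fresnel_reflectance_normal(n1: float, n2: float) -> float:
    """Normal-incidence power reflectance at a planar interface
    between lossless media with refractive indices n1 and n2:
    R = ((n1 - n2) / (n1 + n2))**2."""
    r = (n1 - n2) / (n1 + n2)
    return r * r

# Silicon (n ~ 3.42 in the THz band) against air: about 30% of the
# power is reflected back into the lens at each internal bounce,
# which is why higher-order reflections cannot simply be ignored.
R = fresnel_reflectance_normal(3.42, 1.0)
```

At oblique incidence on the concave boundary the reflectance varies with angle and polarization, but this normal-incidence figure already shows why successive bounces remain significant.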
  2. The nonlinear Gaussian-noise (GN) model is a useful analytical tool for estimating the impact of Kerr-nonlinearity distortion on the performance of coherent optical communications systems without inline dispersion compensation. The original nonlinear GN model was formulated for systems with identical single-mode fiber spans. Since its inception, it has been modified for a variety of link configurations. However, its application to systems with hybrid fiber spans, each composed of multiple fiber segments with different attributes, has attracted scarcely any attention. This invited paper is dedicated to the extended nonlinear GN model for coherent optical communications systems with hybrid fiber spans. We review the few publications on the topic and provide a unified formalism for the analytical calculation of the nonlinear noise variance. To illustrate the usefulness of the extended nonlinear GN model, we apply it to systems with fiber spans composed of a quasi-single-mode fiber segment and a single-mode fiber segment in tandem. In this configuration, a quasi-single-mode fiber with a large effective area is placed at the beginning of each span, where the launch power is highest, to suppress most of the nonlinear distortion; it is followed by a single-mode fiber segment with a smaller effective area to limit the multipath interference introduced by the quasi-single-mode fiber to acceptable levels. We show that the optimal fiber splitting ratio per span can be calculated with sufficient accuracy using the extended nonlinear GN model for hybrid fiber spans presented here.
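The splitting-ratio optimization can be pictured with a deliberately simplified toy model (all coefficients here are hypothetical placeholders, not the extended GN closed-form expressions): nonlinear noise falls as more of the span is large-effective-area quasi-single-mode fiber, while multipath interference grows with the quasi-single-mode length, so the total impairment has an interior minimum.

```python
import numpy as np

def total_impairment(x: float, c_nli: float = 1.0,
                     c_mpi: float = 0.6, area_ratio: float = 2.0) -> float:
    """Toy span-impairment model. x is the quasi-single-mode (QSM)
    fraction of the span, 0..1. NLI mixes the two segments'
    contributions, with the QSM part suppressed by its larger
    effective area; MPI grows superlinearly with the QSM length.
    Purely illustrative coefficients, not the extended GN model."""
    nli = c_nli * (x / area_ratio ** 2 + (1.0 - x))
    mpi = c_mpi * x ** 2
    return nli + mpi

# Scan the splitting ratio: the optimum lies strictly inside (0, 1).
xs = np.linspace(0.0, 1.0, 101)
best = xs[int(np.argmin([total_impairment(x) for x in xs]))]
# Analytic minimum of 1 - 0.75*x + 0.6*x**2 sits at x = 0.625.
```

The actual paper derives this optimum from the extended GN model's nonlinear noise variance; the sketch above only conveys why a tandem split beats either fiber alone under this trade-off.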
  3. Obeid, Iyad; Selesnick, Ivan; Picone, Joseph (Eds.)
    The Temple University Hospital Seizure Detection Corpus (TUSZ) [1] has been in distribution since April 2017. It is a subset of the TUH EEG Corpus (TUEG) [2] and the most frequently requested corpus from our 3,000+ subscribers. It was recently featured as the challenge task in the Neureka 2020 Epilepsy Challenge [3]. A summary of the development of the corpus is shown below in Table 1. The TUSZ Corpus is a fully annotated corpus, which means every seizure event that occurs within its files has been annotated. The data is selected from TUEG using a screening process that identifies files most likely to contain seizures [1]. Approximately 7% of the TUEG data contains a seizure event, so it is important that we triage TUEG for high-yield data. One hour of EEG data requires approximately one hour of human labor to annotate using the pipeline described below, so it is also important from a financial standpoint that we triage data accurately. A summary of the labels used to annotate the data is shown in Table 2. Certain standards are put in place to streamline the annotation process without sacrificing consistency. Due to the nature of EEG recordings, some records start with a segment of calibration. This portion of the EEG is instantly recognizable and transitions from what resembles lead artifact to a flat line on all the channels. For the sake of seizure annotation, the calibration is ignored, and no time is spent on it. During the identification of seizure events, a hard "3-second rule" is used to determine whether two events should be combined into a single larger event. This greatly reduces the time it takes to annotate a file with multiple events occurring in succession. In addition to the required minimum 3-second gap between seizures, our standard also dictates that no seizure shorter than 3 seconds be annotated.
Although there is no universally accepted definition of how long a seizure must be, we find that it is difficult to distinguish a seizure with confidence from burst suppression or other morphologically similar patterns when the event is only a couple of seconds long. This is mainly because such short events lack the evolution that is often crucial for identifying a seizure. After the EEG files have been triaged, a team of annotators at NEDC is provided with the files to begin data annotation. An example of an annotation is shown in Figure 1. A summary of the workflow for our annotation process is shown in Figure 2. Several passes are performed over the data to ensure the annotations are accurate. Each file undergoes three passes to ensure that no seizures were missed or misidentified. The first pass of TUSZ involves identifying which files contain seizures and annotating them using our annotation tool. The time it takes to fully annotate a file can vary drastically depending on its specific characteristics; on average, however, a file containing multiple seizures takes 7 minutes to fully annotate. This includes the time it takes to read the patient report as well as to traverse the entire file. Once an event has been identified, the start and stop times of the seizure are stored in our annotation tool. This is done on a channel-by-channel basis, resulting in an accurate representation of the seizure spreading across different parts of the brain. Files that do not contain any seizures take approximately 3 minutes to complete. Even though no annotation is being made, the file is still carefully examined to make sure that nothing was overlooked. In addition to scrolling through a file from start to finish, a file is often examined through different lenses. Depending on the situation, low-pass filters are applied, and the amplitude of certain channels is increased.
These techniques are never used in isolation and are meant to further increase our confidence that nothing was missed. Once each file in a given set has been examined once, the annotators start the review process. The reviewer checks a file and comments on any changes they recommend. This takes about 3 minutes per seizure-containing file, significantly less than the first pass. After each file has been commented on, the third pass commences. This step takes about 5 minutes per seizure file and requires the reviewer to accept or reject the changes that the second reviewer suggested. Since tangible changes are made to the annotation using the annotation tool, this step takes a bit longer than the previous one. Assuming 18% of the files contain seizures, a set of 1,000 files takes roughly 127 work hours to annotate. Before an annotator contributes to the data interpretation pipeline, they are trained for several weeks on previous datasets. A new annotator can thus be trained on data that resembles what they will see under normal circumstances. An additional benefit of training on released data is that it serves as a means of constantly checking our work. If a trainee stumbles across an event that was not previously annotated, it is promptly added, and the data release is updated. It takes about three months to train an annotator to the point where their annotations can be trusted. Even though we carefully screen potential annotators during the hiring process, only about 25% of the annotators we hire stay in this work for more than one year. To ensure that the annotators are consistent, the team periodically conducts an interrater agreement evaluation to confirm that there is a consensus within the team. The annotation standards are discussed in Ochal et al. [4]. An extended discussion of interrater agreement can be found in Shah et al. [5].
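The per-pass timings above can be turned into a rough workload estimate. The sketch below uses only the per-file minutes quoted in the text and assumes the second and third passes touch seizure-containing files only, so it is a lower bound: the roughly 127 work hours quoted for 1,000 files also covers overhead (triage, re-checks, breaks) that this simple tally omits.

```python
def annotation_hours(n_files: int, seizure_frac: float = 0.18,
                     first_sz_min: float = 7.0, first_clean_min: float = 3.0,
                     review_min: float = 3.0, third_min: float = 5.0) -> float:
    """Lower-bound annotation effort in hours, from the per-file
    minutes quoted in the text: first pass over every file, then
    review and final passes over seizure-containing files only."""
    n_sz = n_files * seizure_frac
    n_clean = n_files - n_sz
    minutes = (n_sz * first_sz_min + n_clean * first_clean_min
               + n_sz * review_min + n_sz * third_min)
    return minutes / 60.0

# annotation_hours(1000) gives about 86 hours of pure pass time,
# below the ~127 h figure that includes the omitted overhead.
```

The gap between this bound and the quoted figure is a useful reminder that the passes themselves are only part of the pipeline's cost.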
The most recent release of TUSZ, v1.5.2, represents our efforts to review the quality of the annotations for two upcoming challenges we hosted: an internal deep learning challenge at IBM [6] and the Neureka 2020 Epilepsy Challenge [3]. One of the biggest changes made to the annotations was the imposition of a stricter standard for determining the start and stop times of a seizure. Although evolution is still included in the annotations, start times were altered to begin when the spike-wave pattern becomes distinct, as opposed to merely when the signal starts to shift from background. This cuts down on background that was mislabeled as seizure. For seizure end times, all post-ictal slowing that had been included was removed. The v1.5.2 release did not include any additional data files, with one exception: two EEG files that were corrupted in v1.5.1 were recovered and added for the latest release. The progression from v1.5.0 to v1.5.1, and later to v1.5.2, included the re-annotation of all of the EEG files in order to produce a dataset with confident seizure identification. Starting with v1.4.0, we have also developed a blind evaluation set that is withheld for use in competitions. The annotation team is currently working on the next release of TUSZ, v1.6.0, which is expected in August 2020. It will include new data from 2016 to mid-2019. This release will contain 2,296 files from 2016 as well as several thousand files representing the remaining data through mid-2019. In addition to files obtained with our standard triaging process, part of this release consists of EEG files that do not have associated patient reports. Since actual seizure events are in short supply, we are mining a large chunk of data for which we have EEG recordings but no reports.
Some of this data contains interesting seizure events collected during long-term EEG sessions or data collected from patients with a history of frequent seizures. It is being mined to increase the number of files in the corpus that have at least one seizure event. We expect v1.6.0 to be released before IEEE SPMB 2020. The TUAR Corpus is an open-source database that is currently available for use by any registered member of our consortium. To register and receive access, please follow the instructions provided at this web page: https://www.isip.piconepress.com/projects/tuh_eeg/html/downloads.shtml. The data is located here: https://www.isip.piconepress.com/projects/tuh_eeg/downloads/tuh_eeg_artifact/v2.0.0/. 
  4. BACKGROUND Electromagnetic (EM) waves underpin modern society in profound ways. They are used to carry information, enabling broadcast radio and television, mobile telecommunications, and ubiquitous access to data networks through Wi-Fi, and they form the backbone of our modern broadband internet through optical fibers. In fundamental physics, EM waves serve as an invaluable tool to probe objects from cosmic to atomic scales. For example, the Laser Interferometer Gravitational-Wave Observatory and atomic clocks, which are among the most precise human-made instruments in the world, rely on EM waves to reach unprecedented accuracies. This has motivated decades of research to develop coherent EM sources over broad spectral ranges, with impressive results: frequencies in the range of tens of gigahertz (radio and microwave regimes) can readily be generated by electronic oscillators. Resonant tunneling diodes enable the generation of millimeter (mm) and terahertz (THz) waves, which span from tens of gigahertz to a few terahertz. At even higher frequencies, up to the petahertz level, which are usually defined as optical frequencies, coherent waves can be generated by solid-state and gas lasers. However, these approaches often suffer from narrow spectral bandwidths, because they usually rely on well-defined energy states of specific materials, which results in rather limited spectral coverage. To overcome this limitation, nonlinear frequency-mixing strategies have been developed. These approaches shift the complexity from the EM source to nonresonant material effects. Particularly in the optical regime, a wealth of materials exists that supports effects suitable for frequency mixing. Over the past two decades, the idea of manipulating these materials to form guiding structures (waveguides) has provided improvements in efficiency, miniaturization, and production scale and cost, and has been widely implemented for diverse applications.
ADVANCES Lithium niobate, a crystal first grown in 1949, is a particularly attractive photonic material for frequency mixing because of its favorable material properties. Bulk lithium niobate crystals and weakly confining waveguides have been used for decades to access different parts of the EM spectrum, from gigahertz to petahertz frequencies. Now, this material is experiencing renewed interest owing to the commercial availability of thin-film lithium niobate (TFLN). This integrated photonic material platform enables tight mode confinement, which improves frequency-mixing efficiency by orders of magnitude while offering additional degrees of freedom for engineering the optical properties through approaches such as dispersion engineering. Importantly, the large refractive index contrast of TFLN enables, for the first time, the realization of lithium niobate–based photonic integrated circuits on a wafer scale. OUTLOOK The broad spectral coverage, ultralow power requirements, and flexibility of lithium niobate photonics in EM wave generation provide a large toolset for exploring new device functionalities. Furthermore, the adoption of lithium niobate–integrated photonics in foundries is a promising approach to miniaturizing essential bench-top optical systems using wafer-scale production. Heterogeneous integration of active materials with lithium niobate has the potential to create integrated photonic circuits with rich functionalities. Applications such as high-speed communications, scalable quantum computing, artificial intelligence and neuromorphic computing, and compact optical clocks for satellites and precision sensing are expected to benefit particularly from these advances and provide a wealth of opportunities for commercial exploration.
Also, bulk crystals and weakly confining waveguides in lithium niobate are expected to continue playing a crucial role in the near future because of their advantages in high-power and loss-sensitive quantum optics applications. As such, lithium niobate photonics holds great promise for unlocking the EM spectrum and reshaping information technologies for our society in the future. Lithium niobate spectral coverage: the EM spectral range and processes for generating EM frequencies when using lithium niobate (LN) for frequency mixing. AO, acousto-optic; AOM, acousto-optic modulation; χ(2), second-order nonlinearity; χ(3), third-order nonlinearity; EO, electro-optic; EOM, electro-optic modulation; HHG, high-harmonic generation; IR, infrared; OFC, optical frequency comb; OPO, optical parametric oscillator; OR, optical rectification; SCG, supercontinuum generation; SHG, second-harmonic generation; UV, ultraviolet.
  5. Abstract

    A hemispherical pattern in inner core attenuation (Q) has been observed from the apparent Q of inner core transmitted waves (PKP(DF)). How the elastic scattering (Q_SC) originating from kilometer-scale inner core heterogeneities relates to large-scale apparent Q variations remains elusive. Such inner core scattered (ICS) energy is characterized by emergent, long-lasting, high-frequency (1-4 Hz) coda immediately following pre-critical reflections (PKiKP) from the inner core boundary (ICB). In this study, we develop a framework to systematically investigate ICS and examine its hemispherical pattern using data from two arrays that sample opposing sides of the Pacific quasi-hemispherical boundary. We use all the viable data from earthquakes (Mw ≥ 5.8) within the past 2-3 decades recorded at distances of 50°-75° by YKA and ILAR, small-aperture (∼10-20 km) seismic arrays situated in northwestern North America. Two independent waveform stripping methods are applied to extract ICS from the background energy, and various stacks are performed to identify ICS and its spatial pattern. We found that individual beams lacking clear evidence of PKiKP coda waves reveal ICS characteristics when stacked together, implying that ICS is ubiquitous. We used a modified phonon-based simulation to reinforce the idea that ICS is primarily created by volumetric heterogeneity within the inner core as opposed to ICB topography. With simplified two-layer ICS models, our results suggest that ICS within the eastern quasi-hemisphere is slightly stronger than in the western quasi-hemisphere, although intra-hemispherical variations are as significant, and our sampling is limited to patches of the northern hemisphere.
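The beam stacking referred to above (delay-and-sum beamforming across a small-aperture array, which lets coherent coda energy grow against incoherent background) can be illustrated with a minimal sketch. The station count, delays, and sampling rate below are hypothetical, not YKA or ILAR parameters, and real processing would use fractional-sample shifts rather than integer rolls.

```python
import numpy as np

def delay_and_sum(traces: np.ndarray, delays_s, fs: float) -> np.ndarray:
    """Shift each station's trace back by its predicted delay and
    average, so energy arriving with the assumed slowness stacks
    coherently while incoherent noise averages down by ~1/sqrt(N).
    traces: (n_stations, n_samples); delays_s: per-station delays in
    seconds relative to a reference; fs: sampling rate in Hz."""
    beam = np.zeros(traces.shape[1])
    for trace, delay in zip(traces, delays_s):
        beam += np.roll(trace, -int(round(delay * fs)))
    return beam / traces.shape[0]

# Hypothetical 3-station example: a unit pulse arriving 0, 5, and 10
# samples late at successive stations stacks back to a single pulse.
fs = 100.0
pulse = np.zeros(1000)
pulse[200] = 1.0
delays = [0.0, 0.05, 0.10]
traces = np.stack([np.roll(pulse, int(d * fs)) for d in delays])
beam = delay_and_sum(traces, delays, fs)  # beam[200] == 1.0
```

This is the mechanism that lets beams individually lacking clear PKiKP coda still reveal ICS characteristics once many of them are stacked.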
