

Title: Removing Stripes, Scratches, and Curtaining with Nonrecoverable Compressed Sensing
Abstract: Highly directional image artifacts such as ion mill curtaining, mechanical scratches, or image striping from beam instability degrade the interpretability of micrographs. These unwanted, aperiodic features extend the image along a primary direction and occupy a small wedge of information in Fourier space. Deleting this wedge of data replaces stripes, scratches, or curtaining with more complex streaking and blurring artifacts, known within the tomography community as "missing wedge" artifacts. Here, we overcome this problem by recovering the missing region using total variation minimization, which leverages image sparsity-based reconstruction techniques (colloquially referred to as compressed sensing, CS) to reliably restore images corrupted by stripe-like features. Our approach removes beam instability, ion mill curtaining, mechanical scratches, or any stripe features and remains robust at low signal-to-noise ratios. The success of this approach is achieved by exploiting CS's inability to recover directional structures that are highly localized and missing in Fourier space.
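The pipeline the abstract describes can be sketched end to end: delete a narrow wedge of Fourier data aligned with the stripe direction, then inpaint the missing region by total variation (TV) minimization with a data-consistency projection on the retained coefficients. The NumPy sketch below is a minimal illustration of that idea, not the authors' implementation; all names and numbers (`wedge_mask`, `half_width_deg`, the step size) are illustrative assumptions.

```python
import numpy as np

def wedge_mask(shape, axis_deg=0.0, half_width_deg=5.0):
    """True where Fourier data are KEPT; False inside the narrow wedge
    that contains the stripe energy (angles are illustrative)."""
    ny, nx = shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    ang = np.degrees(np.arctan2(ky, kx))                    # -180..180
    diff = np.abs((ang - axis_deg + 90.0) % 180.0 - 90.0)   # angular distance mod 180
    keep = diff > half_width_deg
    keep[0, 0] = True                                       # always keep the DC term
    return keep

def tv_grad(u, eps=1e-8):
    """Gradient of a smoothed isotropic total-variation penalty."""
    ux = np.roll(u, -1, 1) - u
    uy = np.roll(u, -1, 0) - u
    mag = np.sqrt(ux**2 + uy**2 + eps)
    px, py = ux / mag, uy / mag
    return -((px - np.roll(px, 1, 1)) + (py - np.roll(py, 1, 0)))

def destripe(img, axis_deg=0.0, n_iter=100, step=0.1):
    """Zero the stripe wedge, then inpaint it by TV descent while
    re-imposing the retained Fourier coefficients each iteration."""
    keep = wedge_mask(img.shape, axis_deg)
    Y = np.fft.fft2(img)
    u = np.fft.ifft2(np.where(keep, Y, 0)).real             # naive wedge deletion
    for _ in range(n_iter):
        u = u - step * tv_grad(u)                           # TV minimization step
        U = np.fft.fft2(u)
        u = np.fft.ifft2(np.where(keep, Y, U)).real         # data consistency
    return u
```

Because the TV prior favors piecewise-smooth images, it fills the wedge without regenerating the highly localized directional energy that was deleted, which is the behavior the abstract exploits.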
Award ID(s):
1807984
NSF-PAR ID:
10106116
Journal Name:
Microscopy and Microanalysis
Volume:
25
Issue:
3
ISSN:
1431-9276
Page Range / eLocation ID:
705 to 710
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    This work provides the details of a simple and reliable method, with reduced sample damage, for preparing electron transparent samples for in situ studies in scanning/transmission electron microscopy. In this study, we use epitaxial VO2 thin films grown on c‐Al2O3 by pulsed laser deposition, which have a monoclinic–rutile transition at ~68°C. We employ an approach combining conventional mechanical wedge‐polishing and focused ion beam (FIB) milling to prepare electron transparent samples of the epitaxial VO2 thin films. The samples are first mechanically wedge‐polished and ion‐milled to be electron transparent. Subsequently, the thin regions of the VO2 films are separated from the rest of the polished sample using a focused ion beam and transferred to the in situ electron microscopy test stage. As a critical step, carbon nanotubes are used as connectors to the manipulator needle for a soft transfer process. This avoids shattering of the brittle substrate and film on the in situ sample support stage during the transfer. We finally present the atomically resolved structural transition in VO2 films imaged using this technique. This approach significantly increases the success rate of high‐quality sample preparation with less damage for in situ studies of thin films and reduces the cost and instrumental/user errors associated with other techniques.

    The present work highlights a novel, simple, and reliable approach with reduced damage for making electron transparent samples, enabling atomic‐scale insights into temperature‐dependent transitions in epitaxial thin-film heterostructures using in situ TEM studies.

     
  2. Abstract The freeform generation of active electronics can impart advanced optical, computational, or sensing capabilities to an otherwise passive construct by overcoming the geometrical and mechanical dichotomies between conventional electronics manufacturing technologies and a broad range of three-dimensional (3D) systems. Previous work has demonstrated the capability to entirely 3D print active electronics such as photodetectors and light-emitting diodes by leveraging an evaporation-driven multi-scale 3D printing approach. However, the evaporative patterning process is highly sensitive to print parameters such as concentration and ink composition. The assembly process is governed by the multiphase interactions between solutes, solvents, and the microenvironment. The process is susceptible to environmental perturbations and instability, which can cause unexpected deviation from targeted print patterns. The ability to print consistently is particularly important for the printing of active electronics, which require the integration of multiple functional layers. Here we demonstrate a synergistic integration of a microfluidics-driven multi-scale 3D printer with a machine learning algorithm that can precisely tune colloidal ink composition and classify complex internal features. Specifically, the microfluidics-driven 3D printer can rapidly modulate ink composition, such as concentration and solvent-to-cosolvent ratio, to explore multi-dimensional parameter space. The integration of the printer with an image-processing algorithm and a support vector machine-guided classification model enables automated, in situ pattern classification. We envision that such integration will provide valuable insights in understanding the complex evaporation-driven assembly process and ultimately enable autonomous optimisation of printing parameters that can robustly adapt to unexpected perturbations.
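As a rough illustration of the SVM-guided classification step described above, the sketch below trains a binary linear SVM by sub-gradient descent on the hinge loss. This stands in for, and is not, the authors' pipeline: the feature vectors (which in the paper would come from image processing of printed patterns), labels, and hyperparameters are invented for illustration.

```python
import numpy as np

def fit_linear_svm(X, y, lr=0.1, lam=0.01, epochs=200, seed=0):
    """Binary linear SVM trained by sub-gradient descent on the
    L2-regularized hinge loss. Labels y must be in {-1, +1}; rows of X
    could be, e.g., texture statistics of printed-pattern micrographs."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                       # hinge active: push point out of margin
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:                                # only weight decay applies
                w = (1 - lr * lam) * w
    return w, b

def predict(X, w, b):
    """Classify feature vectors by the sign of the decision function."""
    return np.sign(X @ w + b)
```

A kernelized or multi-class SVM (as a library such as scikit-learn provides) would follow the same decision-boundary idea with a richer feature mapping.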
  3. Abstract

    A fast, robust pipeline for strain mapping of crystalline materials is important for many technological applications. Scanning electron nanodiffraction allows us to calculate strain maps with high accuracy and spatial resolution, but this technique is limited when the electron beam undergoes multiple scattering. Deep-learning methods have the potential to invert these complex signals, but require a large number of training examples. We implement a Fourier space, complex-valued deep neural network, FCU-Net, to invert highly nonlinear electron diffraction patterns into the corresponding quantitative structure factor images. FCU-Net was trained using over 200,000 unique simulated dynamical diffraction patterns from different combinations of crystal structures, orientations, thicknesses, and microscope parameters, which were augmented with experimental artifacts. We evaluated FCU-Net against simulated and experimental datasets, where it substantially outperforms conventional analysis methods. Our code, models, and training library are open-source and may be adapted to different diffraction measurement problems.
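The defining ingredient of a network like the one described above is complex-valued arithmetic on Fourier-space inputs. The toy layer below illustrates only that ingredient (a complex affine map followed by a split real/imaginary ReLU, sometimes called CReLU); it is not the released FCU-Net code, and the shapes and weights are invented.

```python
import numpy as np

def complex_dense(z, W, b):
    """Complex-valued affine layer: inputs, weights, and bias are complex."""
    return z @ W + b

def crelu(z):
    """Split activation: apply ReLU to real and imaginary parts separately."""
    return np.maximum(z.real, 0.0) + 1j * np.maximum(z.imag, 0.0)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3))   # illustrative weights
b = np.zeros(3, dtype=complex)
z = rng.normal(size=(2, 4)) + 1j * rng.normal(size=(2, 4))   # mock Fourier features
out = crelu(complex_dense(z, W, b))
```

In the full model, stacks of such complex-valued layers (with convolutions in place of dense maps) carry both amplitude and phase information through the network.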

     
  4. BACKGROUND Optical sensing devices measure the rich physical properties of an incident light beam, such as its power, polarization state, spectrum, and intensity distribution. Most conventional sensors, such as power meters, polarimeters, spectrometers, and cameras, are monofunctional and bulky. For example, classical Fourier-transform infrared spectrometers and polarimeters, which characterize the optical spectrum in the infrared and the polarization state of light, respectively, can occupy a considerable portion of an optical table. Over the past decade, the development of integrated sensing solutions by using miniaturized devices together with advanced machine-learning algorithms has accelerated rapidly, and optical sensing research has evolved into a highly interdisciplinary field that encompasses devices and materials engineering, condensed matter physics, and machine learning. To this end, future optical sensing technologies will benefit from innovations in device architecture, discoveries of new quantum materials, demonstrations of previously uncharacterized optical and optoelectronic phenomena, and rapid advances in the development of tailored machine-learning algorithms. ADVANCES Recently, a number of sensing and imaging demonstrations have emerged that differ substantially from conventional sensing schemes in the way that optical information is detected. A typical example is computational spectroscopy. In this new paradigm, a compact spectrometer first collectively captures the comprehensive spectral information of an incident light beam using multiple elements or a single element under different operational states and generates a high-dimensional photoresponse vector. An advanced algorithm then interprets the vector to achieve reconstruction of the spectrum. This scheme shifts the physical complexity of conventional grating- or interference-based spectrometers to computation. 
Moreover, many of the recent developments go well beyond optical spectroscopy, and we discuss them within a common framework, dubbed “geometric deep optical sensing.” The term “geometric” is intended to emphasize that in this sensing scheme, the physical properties of an unknown light beam and the corresponding photoresponses can be regarded as points in two respective high-dimensional vector spaces and that the sensing process can be considered to be a mapping from one vector space to the other. The mapping can be linear, nonlinear, or highly entangled; for the latter two cases, deep artificial neural networks represent a natural choice for the encoding and/or decoding processes, from which the term “deep” is derived. In addition to this classical geometric view, the quantum geometry of Bloch electrons in Hilbert space, such as Berry curvature and quantum metrics, is essential for the determination of the polarization-dependent photoresponses in some optical sensors. In this Review, we first present a general perspective of this sensing scheme from the viewpoint of information theory, in which the photoresponse measurement and the extraction of light properties are deemed as information-encoding and -decoding processes, respectively. We then discuss demonstrations in which a reconfigurable sensor (or an array thereof), enabled by device reconfigurability and the implementation of neural networks, can detect the power, polarization state, wavelength, and spatial features of an incident light beam. OUTLOOK As increasingly more computing resources become available, optical sensing is becoming more computational, with device reconfigurability playing a key role. On the one hand, advanced algorithms, including deep neural networks, will enable effective decoding of high-dimensional photoresponse vectors, which reduces the physical complexity of sensors. 
Therefore, it will be important to integrate memory cells near or within sensors to enable efficient processing and interpretation of a large amount of photoresponse data. On the other hand, analog computation based on neural networks can be performed with an array of reconfigurable devices, which enables direct multiplexing of sensing and computing functions. We anticipate that these two directions will become the engineering frontier of future deep sensing research. On the scientific frontier, exploring quantum geometric and topological properties of new quantum materials in both linear and nonlinear light-matter interactions will enrich the information-encoding pathways for deep optical sensing. In addition, deep sensing schemes will continue to benefit from the latest developments in machine learning. Future highly compact, multifunctional, reconfigurable, and intelligent sensors and imagers will find applications in medical imaging, environmental monitoring, infrared astronomy, and many other areas of our daily lives, especially in the mobile domain and the Internet of Things. Schematic of deep optical sensing: the n-dimensional unknown information (w) is encoded into an m-dimensional photoresponse vector (x) by a reconfigurable sensor (or an array thereof), from which w′ is reconstructed by a trained neural network (n′ = n and w′ ≈ w). Alternatively, x may be directly deciphered to capture certain properties of w. Here, w, x, and w′ can be regarded as points in their respective high-dimensional vector spaces ℛn, ℛm, and ℛn′.
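The encode/decode picture above (w → x → w′) can be made concrete in the simplest, linear case, where the decoder is just a least-squares inverse; a trained neural network takes the decoder's place when the mapping is nonlinear or entangled. The responsivity matrix A below is random and purely illustrative, as are the dimensions n and m.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 32                       # unknown-light dimension, photoresponse dimension
A = rng.normal(size=(m, n))        # illustrative responsivity of a reconfigurable sensor

def encode(w):
    """Sensor: map light properties w in R^n to photoresponses x in R^m."""
    return A @ w

def decode(x):
    """Linear decoder (least-squares inverse). With m > n and a
    full-column-rank A, this recovers w exactly; a neural network
    replaces it for nonlinear or entangled encodings."""
    return np.linalg.pinv(A) @ x

w = rng.normal(size=n)             # unknown light properties
w_rec = decode(encode(w))          # reconstructed estimate w'
```

Taking m larger than n mirrors the over-complete measurement strategy of computational spectrometers: redundancy in x makes the decoding robust to noise.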
  5. ABSTRACT

    Measurements of the one-point probability distribution function and higher-order moments (variance, skewness, and kurtosis) of the high-redshift 21-cm fluctuations are among the most direct statistical probes of the non-Gaussian nature of structure formation and evolution during reionization. However, contamination from astrophysical foregrounds and instrument systematics poses significant challenges in measuring these statistics in real observations. In this work, we use forward modelling to investigate the feasibility of measuring 21-cm one-point statistics through a foreground avoidance strategy. Leveraging the characteristic wedge shape of the foregrounds in k-space, we apply a wedge-cut filter that removes the foreground-contaminated modes from a mock data set based on the Hydrogen Epoch of Reionization Array (HERA) instrument, and measure the one-point statistics from the image-space representation of the remaining non-contaminated modes. We experiment with varying degrees of wedge-cutting over different frequency bandwidths and find that the centre of the band is the least susceptible to bias from wedge-cutting. Based on this finding, we introduce a rolling filter method that allows reconstruction of an optimal wedge-cut 21-cm intensity map over the full bandwidth using outputs from wedge-cutting over multiple sub-bands. We perform Monte Carlo simulations to show that HERA should be able to measure the rise in skewness and kurtosis near the end of reionization with the rolling wedge-cut method if foreground leakage from the Fourier transform window function can be controlled.
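A toy version of the wedge-cut-then-measure pipeline described above, assuming an idealized (nfreq, ny, nx) cube in arbitrary k units: zero every Fourier mode inside a foreground wedge |k∥| ≤ m·|k⊥|, transform back to image space, and compute the one-point moments. The slope, grid, and units are illustrative and do not reflect HERA's actual geometry or cosmological unit conversions.

```python
import numpy as np

def wedge_cut(cube, slope=1.0):
    """Zero Fourier modes inside the foreground wedge |k_par| <= slope*|k_perp|
    of a (nfreq, ny, nx) 21-cm cube (arbitrary units, illustrative slope)."""
    nf, ny, nx = cube.shape
    kpar = np.abs(np.fft.fftfreq(nf))[:, None, None]   # line-of-sight wavenumber
    ky = np.fft.fftfreq(ny)[None, :, None]
    kx = np.fft.fftfreq(nx)[None, None, :]
    kperp = np.sqrt(ky**2 + kx**2)                     # transverse wavenumber
    keep = kpar > slope * kperp                        # modes outside the wedge survive
    F = np.fft.fftn(cube)
    return np.fft.ifftn(np.where(keep, F, 0)).real

def one_point_stats(field):
    """Variance, skewness, and excess kurtosis of the image-space field."""
    d = field - field.mean()
    var = np.mean(d**2)
    sd = np.sqrt(var)
    return var, np.mean(d**3) / sd**3, np.mean(d**4) / sd**4 - 3.0
```

In the paper's rolling variant, this cut is applied over multiple overlapping sub-bands and the least-biased (band-centre) outputs are stitched into one full-bandwidth map.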

     