Title: Removing Stripes, Scratches, and Curtaining with Nonrecoverable Compressed Sensing
Abstract Highly directional image artifacts such as ion mill curtaining, mechanical scratches, or image striping from beam instability degrade the interpretability of micrographs. These unwanted, aperiodic features extend across the image along a primary direction and occupy a small wedge of information in Fourier space. Deleting this wedge of data replaces stripes, scratches, or curtaining with more complex streaking and blurring artifacts—known within the tomography community as “missing wedge” artifacts. Here, we overcome this problem by recovering the missing region using total variation minimization, which leverages image sparsity-based reconstruction techniques—colloquially referred to as compressed sensing (CS)—to reliably restore images corrupted by stripe-like features. Our approach removes beam instability, ion mill curtaining, mechanical scratches, or any stripe features and remains robust at low signal-to-noise ratios. The success of this approach comes from exploiting CS's inability to recover directional structures that are highly localized and missing in Fourier space.
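The procedure the abstract describes can be sketched in a few lines of Python. This is a minimal illustration, not the authors' published solver: it deletes a user-chosen wedge of Fourier coefficients, then alternates a smoothed total-variation gradient step with re-imposition of the measured coefficients as a simple stand-in for the TV minimization the paper uses. The wedge angle, wedge width, step size, and iteration count are all illustrative placeholders.

```python
# Sketch: delete a stripe wedge in Fourier space, then inpaint it by
# alternating a smoothed total-variation (TV) descent step with a projection
# that restores the measured Fourier coefficients outside the wedge.
import numpy as np

def wedge_mask(shape, angle_deg=90.0, half_width_deg=4.0):
    """Boolean mask, True inside the wedge to delete (fftshifted layout)."""
    ny, nx = shape
    ky, kx = np.meshgrid(np.arange(ny) - ny // 2,
                         np.arange(nx) - nx // 2, indexing="ij")
    theta = np.degrees(np.arctan2(ky, kx)) % 180.0   # orientation of each freq
    d = np.abs(theta - angle_deg)
    d = np.minimum(d, 180.0 - d)                     # wrap-around distance
    mask = d < half_width_deg
    mask[ny // 2, nx // 2] = False                   # always keep the DC term
    return mask

def tv_grad(x, eps=1e-6):
    """Gradient of a smoothed isotropic TV functional (periodic boundaries)."""
    gx = np.roll(x, -1, 1) - x
    gy = np.roll(x, -1, 0) - x
    mag = np.sqrt(gx**2 + gy**2 + eps)
    px, py = gx / mag, gy / mag
    return -(px - np.roll(px, 1, 1)) - (py - np.roll(py, 1, 0))

def destripe(img, angle_deg=90.0, half_width_deg=4.0, iters=300, step=0.1):
    mask = wedge_mask(img.shape, angle_deg, half_width_deg)
    F = np.fft.fftshift(np.fft.fft2(img))
    known = ~mask                                    # coefficients we trust
    x = np.real(np.fft.ifft2(np.fft.ifftshift(F * known)))
    for _ in range(iters):
        x = x - step * tv_grad(x)                    # promote sparse gradients
        Fx = np.fft.fftshift(np.fft.fft2(x))
        Fx[known] = F[known]                         # keep the measured data
        x = np.real(np.fft.ifft2(np.fft.ifftshift(Fx)))
    return x

# Stripes running along the image rows concentrate their energy along the
# vertical frequency axis, so e.g.: clean = destripe(img, angle_deg=90.0)
```

Because the stripes are highly localized in the deleted wedge, the TV prior fills that region with smooth, consistent structure rather than re-creating them, which is the "nonrecoverable" behavior the paper exploits.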
Moatti, Adele; Sachan, Ritesh; Prater, John; Narayan, Jagdish (Microscopy Research and Technique)
Abstract
This work provides the details of a simple and reliable method, with less damage, to prepare electron transparent samples for in situ studies in scanning/transmission electron microscopy. In this study, we use epitaxial VO2 thin films grown on c-Al2O3 by pulsed laser deposition, which have a monoclinic–rutile transition at ~68°C. We employ an approach combining conventional mechanical wedge polishing and focused ion beam (FIB) milling to prepare the electron transparent samples of epitaxial VO2 thin films. The samples are first mechanically wedge-polished and ion-milled to be electron transparent. Subsequently, the thin region of the VO2 films is separated from the rest of the polished sample using a focused ion beam and transferred to the in situ electron microscopy test stage. As a critical step, carbon nanotubes are used as connectors to the manipulator needle for a soft transfer process. This is done to avoid shattering of the brittle substrate film on the in situ sample support stage during the transfer process. We finally present the atomically resolved structural transition in VO2 films using this technique. This approach significantly increases the success rate of high-quality sample preparation with less damage for in situ studies of thin films and reduces the cost and instrumental/user errors associated with other techniques.
The present work highlights a novel, simple, reliable approach with reduced damage to make electron transparent samples for atomic-scale insights into temperature-dependent transitions in epitaxial thin film heterostructures using in situ TEM studies.
Abstract The freeform generation of active electronics can impart advanced optical, computational, or sensing capabilities to an otherwise passive construct by overcoming the geometrical and mechanical dichotomies between conventional electronics manufacturing technologies and a broad range of three-dimensional (3D) systems. Previous work has demonstrated the capability to entirely 3D print active electronics such as photodetectors and light-emitting diodes by leveraging an evaporation-driven multi-scale 3D printing approach. However, the evaporative patterning process is highly sensitive to print parameters such as concentration and ink composition. The assembly process is governed by the multiphase interactions between solutes, solvents, and the microenvironment. The process is susceptible to environmental perturbations and instability, which can cause unexpected deviation from targeted print patterns. The ability to print consistently is particularly important for the printing of active electronics, which require the integration of multiple functional layers. Here we demonstrate a synergistic integration of a microfluidics-driven multi-scale 3D printer with a machine learning algorithm that can precisely tune colloidal ink composition and classify complex internal features. Specifically, the microfluidics-driven 3D printer can rapidly modulate ink composition, such as concentration and solvent-to-cosolvent ratio, to explore multi-dimensional parameter space. The integration of the printer with an image-processing algorithm and a support vector machine-guided classification model enables automated, in situ pattern classification. We envision that such integration will provide valuable insights in understanding the complex evaporative-driven assembly process and ultimately enable an autonomous optimisation of printing parameters that can robustly adapt to unexpected perturbations.
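As a rough illustration of the SVM-guided classification step mentioned above, the sketch below featurizes a deposit image with a few hand-picked descriptors and fits a scikit-learn support vector classifier. Everything here is hypothetical: the feature choices, label scheme, and function names are placeholders, not the authors' pipeline, which couples the classifier to an image-processing stage on the printer itself.

```python
# Sketch of SVM-based print-pattern classification (illustrative only).
# Assumes grayscale images of printed deposits and integer class labels,
# e.g. 0 = on-target pattern, 1 = deviated pattern; both are hypothetical.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def simple_features(img):
    """Crude descriptors of an evaporative deposit."""
    binary = img > img.mean()
    gy, gx = np.gradient(img.astype(float))
    return [binary.mean(),            # areal coverage of the deposit
            np.hypot(gx, gy).mean(),  # mean gradient magnitude (edge density)
            img.std()]                # overall contrast

def train_classifier(images, labels):
    """Fit an RBF-kernel SVM on the hand-picked features."""
    X = np.array([simple_features(im) for im in images])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X, labels)
    return clf  # clf.predict(...) then classifies new in situ images
```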
Munshi, Joydeep; Rakowski, Alexander; Savitzky, Benjamin H.; Zeltmann, Steven E.; Ciston, Jim; Henderson, Matthew; Cholia, Shreyas; Minor, Andrew M.; Chan, Maria K. Y.; Ophus, Colin (npj Computational Materials)
Abstract
A fast, robust pipeline for strain mapping of crystalline materials is important for many technological applications. Scanning electron nanodiffraction allows us to calculate strain maps with high accuracy and spatial resolution, but this technique is limited when the electron beam undergoes multiple scattering. Deep-learning methods have the potential to invert these complex signals, but require a large number of training examples. We implement a Fourier-space, complex-valued deep neural network, FCU-Net, to invert highly nonlinear electron diffraction patterns into the corresponding quantitative structure factor images. FCU-Net was trained using over 200,000 unique simulated dynamical diffraction patterns from different combinations of crystal structures, orientations, thicknesses, and microscope parameters, which were augmented with experimental artifacts. We evaluated FCU-Net against simulated and experimental datasets, where it substantially outperforms conventional analysis methods. Our code, models, and training library are open-source and may be adapted to different diffraction measurement problems.
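For readers unfamiliar with complex-valued networks of this kind, the snippet below sketches the standard trick of implementing one complex convolution with two real-valued kernels, the building block such Fourier-space networks stack. This is a generic PyTorch illustration, not the published FCU-Net architecture; consult the authors' open-source code for the actual model.

```python
# Sketch: a complex-valued 2D convolution, (Wr + i*Wi) applied to (xr + i*xi),
# realized with two ordinary real convolutions.
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)

    def forward(self, xr, xi):
        # (Wr*xr - Wi*xi) + i*(Wr*xi + Wi*xr)
        yr = self.conv_r(xr) - self.conv_i(xi)
        yi = self.conv_r(xi) + self.conv_i(xr)
        return yr, yi

# Example: real and imaginary parts of a Fourier-transformed pattern.
xr, xi = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
layer = ComplexConv2d(1, 8)
yr, yi = layer(xr, xi)   # complex feature maps, each of shape (1, 8, 64, 64)
```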
BACKGROUND
Optical sensing devices measure the rich physical properties of an incident light beam, such as its power, polarization state, spectrum, and intensity distribution. Most conventional sensors, such as power meters, polarimeters, spectrometers, and cameras, are monofunctional and bulky. For example, classical Fourier-transform infrared spectrometers and polarimeters, which characterize the optical spectrum in the infrared and the polarization state of light, respectively, can occupy a considerable portion of an optical table. Over the past decade, the development of integrated sensing solutions by using miniaturized devices together with advanced machine-learning algorithms has accelerated rapidly, and optical sensing research has evolved into a highly interdisciplinary field that encompasses devices and materials engineering, condensed matter physics, and machine learning. To this end, future optical sensing technologies will benefit from innovations in device architecture, discoveries of new quantum materials, demonstrations of previously uncharacterized optical and optoelectronic phenomena, and rapid advances in the development of tailored machine-learning algorithms.
ADVANCES
Recently, a number of sensing and imaging demonstrations have emerged that differ substantially from conventional sensing schemes in the way that optical information is detected. A typical example is computational spectroscopy. In this new paradigm, a compact spectrometer first collectively captures the comprehensive spectral information of an incident light beam using multiple elements or a single element under different operational states and generates a high-dimensional photoresponse vector. An advanced algorithm then interprets the vector to achieve reconstruction of the spectrum. This scheme shifts the physical complexity of conventional grating- or interference-based spectrometers to computation. Moreover, many of the recent developments go well beyond optical spectroscopy, and we discuss them within a common framework, dubbed “geometric deep optical sensing.” The term “geometric” is intended to emphasize that in this sensing scheme, the physical properties of an unknown light beam and the corresponding photoresponses can be regarded as points in two respective high-dimensional vector spaces and that the sensing process can be considered to be a mapping from one vector space to the other. The mapping can be linear, nonlinear, or highly entangled; for the latter two cases, deep artificial neural networks represent a natural choice for the encoding and/or decoding processes, from which the term “deep” is derived. In addition to this classical geometric view, the quantum geometry of Bloch electrons in Hilbert space, such as Berry curvature and quantum metrics, is essential for the determination of the polarization-dependent photoresponses in some optical sensors.
In this Review, we first present a general perspective of this sensing scheme from the viewpoint of information theory, in which the photoresponse measurement and the extraction of light properties are deemed as information-encoding and -decoding processes, respectively. We then discuss demonstrations in which a reconfigurable sensor (or an array thereof), enabled by device reconfigurability and the implementation of neural networks, can detect the power, polarization state, wavelength, and spatial features of an incident light beam.
OUTLOOK
As increasingly more computing resources become available, optical sensing is becoming more computational, with device reconfigurability playing a key role. On the one hand, advanced algorithms, including deep neural networks, will enable effective decoding of high-dimensional photoresponse vectors, which reduces the physical complexity of sensors. Therefore, it will be important to integrate memory cells near or within sensors to enable efficient processing and interpretation of a large amount of photoresponse data. On the other hand, analog computation based on neural networks can be performed with an array of reconfigurable devices, which enables direct multiplexing of sensing and computing functions. We anticipate that these two directions will become the engineering frontier of future deep sensing research. On the scientific frontier, exploring quantum geometric and topological properties of new quantum materials in both linear and nonlinear light-matter interactions will enrich the information-encoding pathways for deep optical sensing. In addition, deep sensing schemes will continue to benefit from the latest developments in machine learning. Future highly compact, multifunctional, reconfigurable, and intelligent sensors and imagers will find applications in medical imaging, environmental monitoring, infrared astronomy, and many other areas of our daily lives, especially in the mobile domain and the internet of things.
Schematic of deep optical sensing: the n-dimensional unknown information (w) is encoded into an m-dimensional photoresponse vector (x) by a reconfigurable sensor (or an array thereof), from which w′ is reconstructed by a trained neural network (n′ = n and w′ ≈ w). Alternatively, x may be directly deciphered to capture certain properties of w. Here, w, x, and w′ can be regarded as points in their respective high-dimensional vector spaces ℛ^n, ℛ^m, and ℛ^n′.
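The encoding/decoding picture in the schematic can be made concrete with a small numerical sketch. The example below is illustrative only: the responsivity matrix A, the Gaussian test spectrum, the noise level, and the use of ridge regression as a linear stand-in for a trained neural decoder are all assumptions, not details from the Review.

```python
# Sketch: an unknown spectrum w (n-dim) is encoded into photoresponses
# x = A @ w by m detector states with responsivity rows A, then decoded.
# Ridge regression stands in for the trained neural decoder; A is random.
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 30                          # spectral bins, detector states
A = rng.uniform(size=(m, n))            # responsivity of each sensor state

w = np.exp(-0.5 * ((np.arange(n) - 40) / 5.0) ** 2)  # unknown: single peak
x = A @ w + 0.01 * rng.normal(size=m)                # noisy photoresponse

lam = 1e-2                              # regularization (the decoder's prior)
w_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ x)
print(np.linalg.norm(w - w_hat) / np.linalg.norm(w))  # relative error
```

In the deep versions discussed in the Review, a neural network replaces this linear decoder precisely because the sensor-to-response mapping can be nonlinear or entangled.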
Conventional ptychography translates an object through a localized probe beam to widen the field of view in real space. Fourier ptychography translates the object spectrum through a pupil aperture to expand the Fourier bandwidth in reciprocal space. Here we report an imaging modality, termed synthetic aperture ptychography (SAP), to get the best of both techniques. In SAP, we illuminate a stationary object using an extended plane wave and translate a coded image sensor at the far field for data acquisition. The coded layer attached on the sensor modulates the object exit waves and serves as an effective ptychographic probe for phase retrieval. The sensor translation process in SAP synthesizes a large complex-valued wavefront at the intermediate aperture plane. By propagating this wavefront back to the object plane, we can widen the field of view in real space and expand the Fourier bandwidth in reciprocal space simultaneously. We validate the SAP approach with transmission targets and reflection silicon microchips. A 20-mm aperture was synthesized using a 5-mm sensor, achieving a fourfold gain in resolution and a 16-fold gain in field of view for object recovery. In addition, the thin-sample requirement of conventional ptychography no longer applies in SAP: one can digitally propagate the recovered exit wave to any axial position for post-acquisition refocusing. The SAP scheme offers a solution for far-field sub-diffraction imaging without using lenses. It can be adopted in coherent diffraction imaging setups with radiation sources from visible light, extreme ultraviolet, and X-rays to electrons.
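The post-acquisition refocusing mentioned above is typically done by angular-spectrum propagation of the recovered complex exit wave, sketched below. This is a generic textbook implementation, not the authors' code; the wavelength, pixel size, and defocus distance in the usage comment are illustrative placeholders.

```python
# Sketch: angular-spectrum propagation of a complex field over distance z.
import numpy as np

def angular_spectrum(u0, wavelength, pixel, z):
    """Propagate field u0 by z (all lengths in the same units)."""
    ny, nx = u0.shape
    fy = np.fft.fftfreq(ny, d=pixel)[:, None]   # spatial frequencies (1/unit)
    fx = np.fft.fftfreq(nx, d=pixel)[None, :]
    arg = 1.0 - (wavelength * fx) ** 2 - (wavelength * fy) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)          # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# e.g. refocus a recovered wave by 50 µm at 532 nm with 2 µm pixels:
# u_z = angular_spectrum(u_recovered, 0.532e-6, 2e-6, 50e-6)
```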
@article{osti_10106116,
title = {Removing Stripes, Scratches, and Curtaining with Nonrecoverable Compressed Sensing},
url = {https://par.nsf.gov/biblio/10106116},
DOI = {10.1017/S1431927619000254},
journal = {Microscopy and Microanalysis},
volume = {25},
number = {3},
author = {Schwartz, Jonathan and Jiang, Yi and Wang, Yongjie and Aiello, Anthony and Bhattacharya, Pallab and Yuan, Hui and Mi, Zetian and Bassim, Nabil and Hovden, Robert},
}