Title: LBIC Imaging of Solar Cells: An Introduction to Scanning Probe-Based Imaging Techniques
Scanning probe-based microscopes (SPMs) are widely used in biology, chemistry, materials science, and physics to image and manipulate matter on the nanoscale. Unfortunately, high school and university departments often lack both the expensive SPM instruments and the materials-microscopy activities needed to educate large numbers of students in this vital imaging technique. As a result, students face challenges participating in and contributing to the nanotechnology revolution driving modern scientific innovation. Here we demonstrate an affordable scanning laser-based imaging system (approximately $400, excluding the computer) that introduces students to the point-by-point image formation process underlying SPM methods. In this laboratory activity, students learn how to construct and optimize images of a working solar panel using a laser beam-induced current (LBIC) imaging system. We envision that undergraduate and graduate students will be able to use this LBIC system for independent solar energy research projects, as well as apply the fundamental knowledge and measurement skills gained here to understand other SPM techniques.
Award ID(s):
2046948
PAR ID:
10394417
Author(s) / Creator(s):
; ; ; ; ; ; ; ; ;
Date Published:
Journal Name:
Journal of Chemical Education
ISSN:
0021-9584
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
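For readers who want a concrete picture of the point-by-point image formation described in the abstract, the sketch below builds an LBIC map by stepping a laser across a grid of positions and recording the induced current at each one. It is an illustrative toy in Python, not the authors' software: the stage and current-readout functions are hypothetical placeholders (the photocurrent is simulated here), and the grid size and step size are arbitrary assumptions.

```python
# Minimal sketch of point-by-point LBIC image formation (illustrative only).
# The hardware calls below are hypothetical stand-ins for whatever scanning
# stage and current-readout electronics are actually used.
import numpy as np
import matplotlib.pyplot as plt

NX, NY = 64, 64          # number of scan points in x and y (assumed)
STEP_MM = 0.5            # step size between laser positions in mm (assumed)

def move_laser_to(x_mm, y_mm):
    """Placeholder: command the scanning stage/mirrors to position the beam."""
    pass

def read_photocurrent(x_mm, y_mm):
    """Placeholder: read the solar cell's induced current (simulated here)."""
    # Simulated response: a uniform cell with one low-current 'defect' spot.
    r = np.hypot(x_mm - 10.0, y_mm - 20.0)
    return 1.0 - 0.8 * np.exp(-(r / 3.0) ** 2)

image = np.zeros((NY, NX))
for iy in range(NY):
    for ix in range(NX):
        x, y = ix * STEP_MM, iy * STEP_MM
        move_laser_to(x, y)                      # step the beam to the next pixel
        image[iy, ix] = read_photocurrent(x, y)  # one pixel = one current reading

plt.imshow(image, origin="lower", cmap="viridis")
plt.colorbar(label="induced current (a.u.)")
plt.title("LBIC map (simulated)")
plt.show()
```

Each pixel corresponds to one beam position and one current reading, which is the same point-by-point logic that underlies SPM image formation more generally.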
More Like this
  1. Point scanning imaging systems (e.g., scanning electron or laser scanning confocal microscopes) are perhaps the most widely used tools for high resolution cellular and tissue imaging. Like all other imaging modalities, the resolution, speed, sample preservation, and signal-to-noise ratio (SNR) of point scanning systems are difficult to optimize simultaneously. In particular, point scanning systems are uniquely constrained by an inverse relationship between imaging speed and pixel resolution. Here we show these limitations can be mitigated via the use of deep learning-based super-sampling of undersampled images acquired on a point-scanning system, which we termed point-scanning super-resolution (PSSR) imaging. Oversampled ground truth images acquired on scanning electron or Airyscan laser scanning confocal microscopes were used to generate semi-synthetic training data for PSSR models that were then used to restore undersampled images. Remarkably, our EM PSSR model was able to restore undersampled images acquired with different optics, detectors, samples, or sample preparation methods in other labs. PSSR enabled previously unattainable xy resolution images with our serial block face scanning electron microscope system. For fluorescence, we show that undersampled confocal images combined with a multiframe PSSR model trained on Airyscan timelapses facilitates Airyscan-equivalent spatial resolution and SNR with ~100x lower laser dose and 16x higher frame rates than corresponding high-resolution acquisitions. In conclusion, PSSR facilitates point-scanning image acquisition with otherwise unattainable resolution, speed, and sensitivity.
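The core idea of PSSR, restoring undersampled scans with a network trained on synthetically degraded high-resolution data, can be sketched in a few lines. The toy PyTorch model below is only a schematic stand-in for the published architecture and training pipeline; the random tensors play the role of real semi-synthetic image pairs.

```python
# Minimal sketch of the PSSR idea: train a network to map synthetically
# undersampled images back to their oversampled ground truth. Illustrative
# toy model only, not the published PSSR architecture or training code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToySuperSampler(nn.Module):
    """Tiny CNN that upsamples a 2x-undersampled image back to full resolution."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
        return self.net(x)

model = ToySuperSampler()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    # Placeholder training pair: high-res 'ground truth' and a 2x-downsampled
    # version standing in for a fast, undersampled acquisition.
    hi_res = torch.rand(8, 1, 64, 64)
    lo_res = F.avg_pool2d(hi_res, kernel_size=2)
    loss = F.mse_loss(model(lo_res), hi_res)
    optim.zero_grad()
    loss.backward()
    optim.step()
```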
  2. In this paper, we have used the angular spectrum propagation method and numerical simulations of a single random phase encoding (SRPE) based lensless imaging system, with the goal of quantifying the spatial resolution of the system and assessing its dependence on the physical parameters of the system. Our compact SRPE imaging system consists of a laser diode that illuminates a sample placed on a microscope glass slide, a diffuser that spatially modulates the optical field transmitted through the input object, and an image sensor that captures the intensity of the modulated field. We considered two-point-source apertures as the input object and analyzed the propagated optical field captured by the image sensor. The output intensity patterns acquired at each lateral separation of the input point sources were analyzed using the correlation between the output pattern for overlapping point sources and the output pattern for separated point sources. The lateral resolution of the system was calculated by finding the lateral separation at which this correlation falls below a threshold of 35%, a value chosen in accordance with the Abbe diffraction limit of an equivalent lens-based system. A direct comparison between the SRPE lensless imaging system and an equivalent lens-based imaging system with similar system parameters shows that, despite being lensless, the SRPE system does not suffer in terms of lateral resolution compared with lens-based imaging systems. We also investigated how this resolution is affected as the parameters of the lensless imaging system are varied. The results show that the SRPE lensless imaging system is robust to the object-to-diffuser and diffuser-to-sensor distances, the pixel size of the image sensor, and the number of pixels of the image sensor. To the best of our knowledge, this is the first work to investigate a lensless imaging system's lateral resolution, its robustness to multiple physical parameters of the system, and its comparison to lens-based imaging systems.
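The angular spectrum propagation step at the heart of such simulations can be written compactly with FFTs. The sketch below is a generic implementation under assumed wavelength, pixel pitch, and propagation distance values; it is not the parameters or code used in the paper.

```python
# Minimal sketch of angular spectrum propagation between parallel planes,
# of the kind used to simulate a lensless imaging system. Wavelength, pixel
# pitch, and distance below are illustrative placeholders.
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_pitch, z):
    """Propagate a complex field a distance z using the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)   # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Transfer function; evanescent components (negative argument) are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: two point sources separated laterally, propagated to a diffuser plane.
N, pitch, lam = 512, 2e-6, 0.65e-6           # grid size, pixel pitch, wavelength
field = np.zeros((N, N), dtype=complex)
field[N // 2, N // 2 - 5] = 1.0              # point source 1
field[N // 2, N // 2 + 5] = 1.0              # point source 2
at_diffuser = angular_spectrum_propagate(field, lam, pitch, z=5e-3)
intensity = np.abs(at_diffuser) ** 2
```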
  3. The need for high-speed imaging in applications such as biomedicine, surveillance, and consumer electronics has called for new developments in imaging systems. While industrial effort continuously pushes the advance of silicon focal plane array image sensors, imaging through a single-pixel detector has gained significant interest thanks to the development of computational algorithms. Here, we present a new imaging modality, deep compressed imaging via optimized-pattern scanning, which can significantly increase the acquisition speed of a single-detector-based imaging system. We project and scan an illumination pattern across the object and collect the sampling signal with a single-pixel detector. We develop an end-to-end optimized auto-encoder, using a deep neural network and a compressed sensing algorithm, to optimize the illumination pattern, which allows us to faithfully reconstruct the image from a small number of measurements at a high frame rate. Compared with the conventional switching-mask-based single-pixel camera and point-scanning imaging systems, our method achieves a much higher imaging speed while retaining a similar imaging quality. We experimentally validated this imaging modality in the settings of both continuous-wave illumination and pulsed-light illumination and showed high-quality image reconstructions with a high compressed sampling rate. This new compressed sensing modality could be widely applied in different imaging systems, enabling new applications that require high imaging speeds.
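The measurement model behind this approach, one detector reading per projected pattern, can be illustrated with a toy example. The sketch below uses random patterns and a pseudo-inverse reconstruction as stand-ins for the learned illumination patterns and deep decoder described in the abstract.

```python
# Minimal sketch of the single-pixel measurement model behind pattern-scanning
# compressed imaging: each measurement is the inner product of one illumination
# pattern with the object (y = A x). Random patterns and a pseudo-inverse
# reconstruction stand in for the paper's learned patterns and deep decoder.
import numpy as np

rng = np.random.default_rng(0)
n_pix = 32 * 32             # object size, flattened (assumed)
n_meas = n_pix // 4         # 4x compressed sampling (illustrative)

obj = rng.random(n_pix)                           # unknown object (placeholder)
patterns = rng.standard_normal((n_meas, n_pix))   # illumination patterns A

y = patterns @ obj                          # single-pixel detector readings
x_hat = np.linalg.pinv(patterns) @ y        # naive least-norm reconstruction

print("relative error:", np.linalg.norm(x_hat - obj) / np.linalg.norm(obj))
```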
  4. Goda, Keisuke; Tsia, Kevin K. (Ed.)
    We present a new deep compressed imaging modality that scans a learned illumination pattern over the sample and detects the signal with a single-pixel detector. This imaging modality allows compressed sampling of the object and thus a high imaging speed. The object is reconstructed through a deep neural network inspired by compressed sensing algorithms. We optimize the illumination pattern and the image reconstruction network by training an end-to-end auto-encoder framework. Compared with the conventional single-pixel camera and point-scanning imaging systems, we accomplish high-speed imaging with a reduced light dose while preserving high imaging quality.
  5. Watrall, Ethan; Goldstein, Lynne (Ed.)
    The transition to digital approaches in archaeology includes moving from 2D to 3D images of artifacts. This paper discusses creating 3D images of artifacts in research with students, formally through a course, and informally in a 3D lab and during field research. Students participate in an ongoing research project by creating 3D digital images of objects and contextualizing them. The benefits of 3D images of artifacts are discussed for research, instruction, and public outreach (including making 3D-printed replicas for teaching and exhibits). In the 3D digital imaging course, students use surface laser scanners to image small objects of the kind encountered in an archaeological excavation, with objects of increasing imaging difficulty over the course of the semester. Midway through the course, each student is assigned an artifact for a project that includes 3D laser scanning and photogrammetry, digital measuring, and research. Students write weekly blog updates on a web page they each create. Students learn to measure digital images and manipulate them with other software; open-source software is encouraged when available. Options for viewing 3D images are discussed so students can link 3D scans to their web pages. Students prepare scans for 3D printing in the Digital Imaging and Visualization (DIVA) Lab. The paper also discusses research and instruction in the DIVA Lab, the Maya field project that created the need for the DIVA Lab, and the use of 3D technology in research and heritage studies in the Maya area.