Title: Characterization of residual charge images in LSST camera e2v CCDs
Abstract: LSST Camera CCDs produced by the manufacturer e2v exhibit strong, novel residual charge images when exposed to bright sources. These residuals appear in images following bright exposures both in the same pixel areas as the bright source and in the pixels trailing between the source and the serial register. Both pose systematic challenges for instrument signature removal in the Rubin Observatory Legacy Survey of Space and Time. The trail region is especially impactful, as it affects a much larger pixel area in a less well-defined position. In our study of this effect at UC Davis, we imaged bright spots to characterize these residual charge effects. We find a strong dependence of the residual charge on the parallel clocking scheme, including the relative levels of the clocking voltages and the timing of the gate phase transitions during the parallel transfer. Our study points to independent causes of residual charge in the bright spot region and the trail region. We propose potential causes in both regions and suggest methodologies for minimizing residual charge. We also consider the trade-offs of these methods, including a reduction of the camera's full well and dynamic range at the high end. The voltage scheme in the main camera was altered accordingly to address this effect.
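The measurement described in the abstract can be made concrete with a short sketch: image a bright spot, then quantify the residual signal left in the spot footprint and in the trail toward the serial register in subsequent dark frames. This is a minimal illustration under stated assumptions, not the study's actual pipeline; the file names, the prescan-based bias estimate, and the region geometry are all placeholders.

```python
# Minimal sketch: after a bright-spot exposure, measure the residual signal
# left in subsequent dark frames, both in the spot footprint and in the
# trail between the spot and the serial register (assumed toward row 0).
# File names, the prescan-based bias estimate, and the box sizes are
# illustrative assumptions, not the study's actual pipeline.
import numpy as np
from astropy.io import fits

def region_mean(image, y0, y1, x0, x1, bias):
    """Mean bias-subtracted signal (ADU) in a pixel box."""
    return float(np.mean(image[y0:y1, x0:x1]) - bias)

spot = fits.getdata("spot_exposure.fits").astype(float)
darks = [fits.getdata(f"dark_{i:03d}.fits").astype(float) for i in range(5)]

# Locate the bright spot at the peak pixel of the illuminated frame.
ys, xs = np.unravel_index(np.argmax(spot), spot.shape)
half = 10  # half-width of the spot box (assumed; spot taken to be interior)

for i, dark in enumerate(darks):
    bias = np.median(dark[:, :20])  # first 20 columns used as a bias proxy
    spot_res = region_mean(dark, ys - half, ys + half, xs - half, xs + half, bias)
    trail_res = region_mean(dark, 0, ys - half, xs - half, xs + half, bias)
    print(f"dark {i}: spot residual {spot_res:.2f} ADU, "
          f"trail residual {trail_res:.2f} ADU")
```

Tracking how the two residuals decay across the dark sequence, and how they respond to changes in clocking voltages and phase timing, is the kind of comparison the study describes.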
Award ID(s):
2205095
PAR ID:
10632906
Author(s) / Creator(s):
Publisher / Repository:
Journal of Instrumentation
Date Published:
Journal Name:
Journal of Instrumentation
Volume:
20
Issue:
07
ISSN:
1748-0221
Page Range / eLocation ID:
P07031
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. The CloudPatch-7 Hyperspectral Dataset is a manually curated collection of hyperspectral images focused on pixel classification of atmospheric cloud classes. The labeled dataset contains 380 patches, each a 50x50 pixel grid, extracted from 28 larger, unlabeled parent images approximately 5000x1500 pixels in size. Captured with the Resonon PIKA XC2 camera, the images span 462 spectral bands from 400 to 1000 nm. Each patch is extracted from a parent image such that all of its pixels fall within one of seven atmospheric conditions: Dense Dark Cumuliform Cloud, Dense Bright Cumuliform Cloud, Semi-transparent Cumuliform Cloud, Dense Cirroform Cloud, Semi-transparent Cirroform Cloud, Clear Sky - Low Aerosol Scattering (dark), and Clear Sky - Moderate to High Aerosol Scattering (bright). Incorporating contextual information from surrounding pixels improves classification into these seven classes, making the dataset a valuable resource for spectral analysis, environmental monitoring, atmospheric science research, and testing machine learning methods that require contextual data. Parent images are very large but can be made available upon request.
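A minimal sketch of how such a patch might be consumed, assuming a hypothetical .npy layout with one (50, 50, 462) cube per patch; the actual distribution format may differ. The neighborhood-flattening helper illustrates the contextual-pixel idea the dataset is designed around.

```python
# Illustrative loader for a CloudPatch-7-style patch: one 50x50 pixel grid
# with 462 spectral bands and a single class label per patch. The .npy
# layout assumed here is hypothetical, not the dataset's actual format.
import numpy as np

CLASSES = [
    "Dense Dark Cumuliform Cloud",
    "Dense Bright Cumuliform Cloud",
    "Semi-transparent Cumuliform Cloud",
    "Dense Cirroform Cloud",
    "Semi-transparent Cirroform Cloud",
    "Clear Sky - Low Aerosol Scattering (dark)",
    "Clear Sky - Moderate to High Aerosol Scattering (bright)",
]

def load_patch(path):
    """Return one (50, 50, 462) hyperspectral cube."""
    cube = np.load(path)
    assert cube.shape == (50, 50, 462), "unexpected patch geometry"
    return cube

def pixel_with_context(cube, row, col, context=2):
    """Flatten a (2*context+1)^2 spatial neighborhood around one interior
    pixel into a single feature vector, so a classifier sees surrounding
    pixels rather than a lone spectrum."""
    window = cube[row - context : row + context + 1,
                  col - context : col + context + 1, :]
    return window.reshape(-1)  # e.g. 5 * 5 * 462 = 11550 features
```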
  2. We outline the scientific motivation for reducing systematics in the image sensors used in the LSST, describe several examples, and present the lab investigations they motivated. The CCD250 (Teledyne-e2v) and STA3900 Imaging Technology Laboratory (ITL) charge-coupled devices (CCDs) used in Rubin Observatory's LSSTCam are tested under a realistic LSST f/1.2 optical beam in a lab setup. In the past, this facility was used to characterize these CCDs, exploring the systematic errors due to charge transport; it is now being used to optimize the clocking scheme and voltages. The effect of different clocking schemes on on-chip systematics such as non-linear crosstalk, noise, persistence, and photon transfer is explored. The goal is to converge on an optimal configuration for the LSSTCam CCDs that minimizes the resulting dark energy science systematics.
  3. We present a parallelized optimization method based on fast Neural Radiance Fields (NeRF) for estimating the 6-DoF pose of a camera with respect to an object or scene. Given a single observed RGB image of the target, we can predict the translation and rotation of the camera by minimizing the residual between pixels rendered from a fast NeRF model and pixels in the observed image. We integrate a momentum-based camera extrinsic optimization procedure into Instant Neural Graphics Primitives, a recent and exceptionally fast NeRF implementation. By introducing parallel Monte Carlo sampling into the pose estimation task, our method overcomes local minima and improves efficiency over a more extensive search space. We also show the importance of adopting a more robust pixel-based loss function to reduce error. Experiments demonstrate that our method achieves improved generalization and robustness on both synthetic and real-world benchmarks.
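A schematic PyTorch rendering of the loop this abstract describes: many candidate poses are refined in parallel with a momentum optimizer against a robust photometric loss. Here `render_pixels` stands in for a differentiable fast-NeRF renderer (Instant-NGP in the paper) and, like the 6-vector pose parameterization and hyperparameters, is an assumption of this sketch.

```python
# Sketch of parallel Monte Carlo pose optimization against a NeRF.
# `render_pixels(poses)` is a stand-in for a differentiable fast-NeRF
# renderer returning a pixel batch per candidate pose; all names and
# hyperparameters here are illustrative.
import torch

def robust_loss(rendered, observed, delta=0.1):
    """Huber-style loss: less sensitive to outlier pixels than plain L2."""
    return torch.nn.functional.huber_loss(rendered, observed, delta=delta)

def estimate_pose(render_pixels, observed, n_candidates=32, steps=100, lr=1e-2):
    # Each candidate pose: 3 translation + 3 axis-angle rotation parameters.
    poses = torch.randn(n_candidates, 6, requires_grad=True)
    optimizer = torch.optim.SGD([poses], lr=lr, momentum=0.9)
    for _ in range(steps):
        optimizer.zero_grad()
        rendered = render_pixels(poses)  # (n_candidates, n_pixels, 3)
        # Candidates are independent, so summed losses give each pose
        # its own gradient; sampling many starts helps escape local minima.
        losses = torch.stack([robust_loss(r, observed) for r in rendered])
        losses.sum().backward()
        optimizer.step()
    best = torch.argmin(losses.detach())
    return poses[best].detach()  # final 6-DoF estimate
```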
  4. Evans, Christopher J.; Bryant, Julia J.; Motohara, Kentaro (Ed.)
    NIRSPEC is a high-resolution near-infrared echelle spectrograph on the Keck II telescope that was commissioned in 1999 and upgraded in 2018. The recent upgrade was aimed at improving the sensitivity and longevity of the instrument through replacement of the spectrometer science detector (SPEC) and the slit-viewing camera (SCAM). Commissioning began in 2018 December, producing the first on-sky images used to characterize the upgraded system. Using photometry and spectroscopy of standard stars and internal calibration lamps, we assess the performance of the upgraded SPEC and SCAM detectors. First, we evaluate the gain, read noise, dark current, and charge persistence of the SPEC detector. We then characterize the newly upgraded spectrometer and the resulting improvements in sensitivity, including spectroscopic zero points, pixel scale, and resolving power across the spectrometer detector field. Finally, for SCAM, we present zero points and pixel scale, and provide a map of the geometric distortion of the camera.
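Gain and read-noise figures like those evaluated for the SPEC detector are commonly derived with the textbook flat-pair photon-transfer method, sketched below. The frame file names are placeholders and this is the standard technique, not necessarily the authors' exact procedure.

```python
# Flat-pair photon-transfer sketch: estimate gain (e-/ADU) and read noise
# (e-) from two flats at equal exposure plus two bias frames. File names
# are placeholders; the formulas are the standard Janesick-style ones.
import numpy as np
from astropy.io import fits

flat1 = fits.getdata("flat1.fits").astype(float)
flat2 = fits.getdata("flat2.fits").astype(float)   # same exposure time
bias1 = fits.getdata("bias1.fits").astype(float)
bias2 = fits.getdata("bias2.fits").astype(float)

# Differencing each pair removes fixed-pattern structure.
flat_diff_var = np.var(flat1 - flat2)
bias_diff_var = np.var(bias1 - bias2)

# Mean signal above bias, summed over the two flats.
signal = flat1.mean() + flat2.mean() - bias1.mean() - bias2.mean()
gain = signal / (flat_diff_var - bias_diff_var)      # e-/ADU
read_noise = gain * np.sqrt(bias_diff_var / 2.0)     # e-, per single frame

print(f"gain ~ {gain:.2f} e-/ADU, read noise ~ {read_noise:.2f} e-")
```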
  5. It is common practice to think of a video as a sequence of images (frames) and to re-use deep neural network models trained only on images for similar analytics tasks on videos. In this paper, we show that this "leap of faith" (that deep learning models which work well on images will also work well on videos) is actually flawed. We show that even when a video camera is viewing a scene that is not changing in any human-perceptible way, and we control for external factors like video compression and environment (lighting), the accuracy of video analytics applications fluctuates noticeably. These fluctuations occur because successive frames produced by the video camera may look similar visually but are perceived quite differently by the video analytics applications. We observed that the root cause of these fluctuations is the dynamic camera parameter changes that a video camera automatically makes in order to capture and produce a visually pleasing video. The camera inadvertently acts as an "unintentional adversary" because these slight changes in the image pixel values in consecutive frames, as we show, have a noticeably adverse impact on the accuracy of insights from video analytics tasks that re-use image-trained deep learning models. To address this inadvertent adversarial effect from the camera, we explore the use of transfer learning techniques to improve learning in video analytics tasks through the transfer of knowledge from learning on image analytics tasks. Our experiments with a number of different cameras and a variety of video analytics tasks show that the inadvertent adversarial effect from the camera can be noticeably offset by quickly re-training the deep learning models using transfer learning. In particular, we show that our newly trained Yolov5 model reduces fluctuation in object detection across frames, which leads to better tracking of objects (∼40% fewer mistakes in tracking). The paper also provides new directions and techniques to mitigate the camera's adversarial effect on deep learning models used for video analytics applications.
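As a generic illustration of the mitigation strategy, the sketch below freezes an image-pretrained torchvision backbone and quickly re-trains a new head on frames from the target camera. The paper fine-tunes Yolov5 for detection; this classifier stand-in, its 10-class head, and the hyperparameters are assumptions of the sketch, not the authors' pipeline.

```python
# Transfer-learning sketch: adapt an image-pretrained model to frames from
# one specific camera, so that the model tolerates the frame-to-frame
# parameter changes that camera makes automatically. The 10-class head and
# hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                       # freeze pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 10)    # new task head (assumed)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def finetune(frame_loader, epochs=3):
    """frame_loader yields (frames, labels) batches drawn from the camera
    whose automatic parameter changes caused the accuracy fluctuations."""
    model.train()
    for _ in range(epochs):
        for frames, labels in frame_loader:
            optimizer.zero_grad()
            loss = criterion(model(frames), labels)
            loss.backward()
            optimizer.step()
```

Because only the small head is trained, the re-training is quick, matching the "quickly re-training" mitigation the abstract reports.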