Title: Unmanned Aerial Imagery over Stordalen Mire, Northern Sweden, 2018
RGB mosaic of 618 images captured with a Lumix GCM1 camera system aboard a Robota Triton XL UAV. Images were captured around solar noon at approximately 80 m above the ground. Spatial resolution is 3 cm.
Award ID(s):
2022070
PAR ID:
10591404
Author(s) / Creator(s):
Publisher / Repository:
Harvard Dataverse
Date Published:
Subject(s) / Keyword(s):
Earth and Environmental Sciences
Format(s):
Medium: X; Size: 535791984; Other: image/tiff
Size(s):
535791984
Location:
Sweden, Lapland, Abisko, Stordalen; (East Bound Longitude: 19.049; North Bound Latitude: 68.358; South Bound Latitude: 68.352; West Bound Longitude: 19.044)
Right(s):
Custom terms specific to this dataset
Sponsoring Org:
National Science Foundation
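
The mosaic is distributed as a single GeoTIFF (image/tiff). As a minimal sketch, assuming a local copy of the file and using a placeholder filename (not the actual Dataverse file name), rasterio can be used to confirm the reported ~3 cm pixel size and the Stordalen bounding box listed above:

    import rasterio  # assumes the GeoTIFF has been downloaded from Harvard Dataverse

    # "stordalen_mosaic_2018.tif" is a placeholder filename, not the Dataverse name.
    with rasterio.open("stordalen_mosaic_2018.tif") as src:
        print("CRS:", src.crs)
        print("size:", src.width, "x", src.height, "pixels in", src.count, "bands")
        print("pixel size:", src.res)   # roughly (0.03, 0.03) if delivered in a metre-based CRS
        print("bounds:", src.bounds)    # should correspond to ~19.044-19.049 E, 68.352-68.358 N
                                        # once expressed in geographic coordinates
        rgb = src.read([1, 2, 3])       # RGB bands as a (3, rows, cols) array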
More Like this
  1. RGB mosaic of 2500 images extracted from video captured with a Leica camera system aboard a Mavic 2 Pro UAV. Images were captured at solar noon at approximately 80 m above the ground. Spatial resolution is 3 cm.
  2. A self-driving car must be able to reliably handle adverse weather conditions (e.g., snowy) to operate safely. In this paper, we investigate the idea of turning sensor inputs (i.e., images) captured in an adverse condition into a benign one (i.e., sunny), upon which the downstream tasks (e.g., semantic segmentation) can attain high accuracy. Prior work primarily formulates this as an unpaired image-to-image translation problem due to the lack of paired images captured under the exact same camera poses and semantic layouts. While perfectly-aligned images are not available, one can easily obtain coarsely-paired images. For instance, many people drive the same routes daily in both good and adverse weather; thus, images captured at close-by GPS locations can form a pair. Though data from repeated traversals are unlikely to capture the same foreground objects, we posit that they provide rich contextual information to supervise the image translation model. To this end, we propose a novel training objective leveraging coarsely-aligned image pairs. We show that our coarsely-aligned training scheme leads to a better image translation quality and improved downstream tasks, such as semantic segmentation, monocular depth estimation, and visual localization. (A minimal GPS-pairing sketch is given after this list.)
  3. The dataset contains aerial photographs of Arctic sea ice obtained during the Healy-Oden Trans Arctic Expedition (HOTRAX), captured from a helicopter between 5 August and 30 September 2005. A total of 1013 images were captured, of which 100 were labeled. The subset of 100 images was created exclusively for the purpose of segmenting sea ice, melt ponds, and open water. Original images, labels, and code for segmentation are included in the files above. For the dataset, see: Ivan Sudakow, Vijayan Asari, Ruixu Liu, & Denis Demchev. (2022). Melt pond from aerial photographs of the Healy–Oden Trans Arctic Expedition (HOTRAX) (1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.6602409. Manuscript: I. Sudakow, V. K. Asari, R. Liu and D. Demchev, "MeltPondNet: A Swin Transformer U-Net for Detection of Melt Ponds on Arctic Sea Ice," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 15, pp. 8776-8784, 2022, doi: 10.1109/JSTARS.2022.3213192.
  4. We introduce DeepIR, a new thermal image processing framework that combines physically accurate sensor modeling with deep network-based image representation. Our key enabling observations are that the images captured by thermal sensors can be factored into slowly changing, scene-independent sensor non-uniformities (that can be accurately modeled using physics) and a scene-specific radiance flux (that is well-represented using a deep network-based regularizer). DeepIR requires neither training data nor periodic ground-truth calibration with a known black body target, making it well suited for practical computer vision tasks. We demonstrate the power of going DeepIR by developing new denoising and super-resolution algorithms that exploit multiple images of the scene captured with camera jitter. Simulated and real data experiments demonstrate that DeepIR can perform high-quality non-uniformity correction with as few as three images, achieving a 10 dB PSNR improvement over competing approaches. (A toy sketch of this factorization is given after this list.)
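
For item 2 above, the coarse GPS-based pairing can be illustrated with a minimal sketch (not the authors' code; the file names, coordinates, and 10 m threshold below are made-up assumptions): each adverse-weather frame is matched to the nearest clear-weather frame from a repeated traversal of the same route.

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance in metres between two (lat, lon) points.
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def coarse_pairs(adverse, clear, max_dist_m=10.0):
        # Pair each adverse-weather frame with the nearest clear-weather frame
        # captured within max_dist_m metres; frames with no nearby match are skipped.
        pairs = []
        for path_a, lat_a, lon_a in adverse:
            best = min(clear, key=lambda c: haversine_m(lat_a, lon_a, c[1], c[2]))
            if haversine_m(lat_a, lon_a, best[1], best[2]) <= max_dist_m:
                pairs.append((path_a, best[0]))
        return pairs

    # Hypothetical GPS-tagged frames: (image_path, latitude, longitude).
    snowy = [("snowy_0001.png", 42.44300, -76.50100), ("snowy_0002.png", 42.44340, -76.50020)]
    sunny = [("sunny_0917.png", 42.44304, -76.50095), ("sunny_0918.png", 42.44338, -76.50018)]
    print(coarse_pairs(snowy, sunny))

In the paper itself, the resulting pairs feed the translation network's training objective; the sketch covers only the pairing step.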
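
For the DeepIR abstract (item 4), the sketch below illustrates the factorization it describes on synthetic data: three jittered frames are modeled as a shifted scene plus a fixed per-pixel offset, the scene is represented by a small deep-image-prior-style network, and the offset is a free parameter map. This is an illustrative stand-in under simplifying assumptions (known integer jitter, offset-only non-uniformity, no gain term, PyTorch as the framework), not the authors' implementation.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    H, W, PAD = 64, 64, 8

    # Synthetic ground truth: a radiance scene plus a fixed per-pixel sensor offset.
    scene_true = torch.rand(H + 2 * PAD, W + 2 * PAD)
    offset_true = 0.3 * torch.randn(H, W)
    shifts = [(0, 0), (3, -2), (-4, 5)]                     # small known camera jitter
    frames = [scene_true[PAD + dy:PAD + dy + H, PAD + dx:PAD + dx + W]
              + offset_true + 0.01 * torch.randn(H, W) for dy, dx in shifts]

    # Scene represented by a small conv net applied to fixed noise (a deep-image-prior
    # style stand-in for DeepIR's network regularizer); offset is a free per-pixel map.
    net = nn.Sequential(nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 1, 3, padding=1))
    z = torch.randn(1, 8, H + 2 * PAD, W + 2 * PAD)
    offset = nn.Parameter(torch.zeros(H, W))
    opt = torch.optim.Adam(list(net.parameters()) + [offset], lr=1e-3)

    for step in range(2000):
        scene = net(z)[0, 0]
        loss = torch.tensor(0.0)
        for (dy, dx), f in zip(shifts, frames):
            pred = scene[PAD + dy:PAD + dy + H, PAD + dx:PAD + dx + W] + offset
            loss = loss + ((pred - f) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Compare the recovered offset to the true one (up to a constant ambiguity).
    err = (offset.detach() - offset.detach().mean()
           - (offset_true - offset_true.mean())).abs().mean()
    print(f"mean |offset error|: {err.item():.3f}")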