Title: Let’s Unleash the Network Judgment: A Self-Supervised Approach for Cloud Image Analysis
Abstract: Accurate cloud type identification and coverage analysis are crucial to understanding the Earth's radiative budget. Traditional computer vision methods rely on low-level visual features of clouds to estimate cloud coverage or sky conditions. Several handcrafted approaches have been proposed; however, scope for improvement remains. Newer deep neural networks (DNNs) have demonstrated superior performance for cloud segmentation and categorization. These methods, however, require either expert engineering intervention in the preprocessing steps (for the traditional methods) or human assistance in assigning cloud or clear-sky labels to pixels (for training the DNNs). Such human mediation imposes considerable time and labor costs. We present the application of a new self-supervised learning approach that autonomously extracts relevant features from sky images captured by ground-based cameras for the classification and segmentation of clouds. We evaluate a joint embedding architecture that uses self-knowledge distillation plus regularization. We use two datasets to demonstrate the network's ability to classify and segment sky images: one with ∼85,000 images collected from our ground-based camera and another with 400 labeled images from the WSISEG database. We find that this approach can discriminate full-sky images based on cloud coverage, diurnal variation, and cloud base height. Furthermore, it semantically segments cloud areas without labels. The approach shows competitive performance on all tested tasks, suggesting a new alternative for cloud characterization.
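The abstract does not spell out the implementation. As a rough illustration of what a joint-embedding architecture with self-knowledge distillation can look like, here is a minimal PyTorch sketch of a student-teacher pair with an exponential-moving-average (EMA) teacher, in the style of DINO; the toy encoder, temperatures, and momentum value are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of a joint-embedding self-distillation setup (DINO-style),
# illustrating the kind of self-supervised objective the abstract describes.
# Encoder, crop sizes, and hyperparameters are illustrative assumptions.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy backbone + projection head; a real setup would use a ViT or ResNet."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )
    def forward(self, x):
        return self.net(x)

student = Encoder()
teacher = copy.deepcopy(student)      # teacher starts as a copy of the student
for p in teacher.parameters():
    p.requires_grad = False           # teacher is updated only by EMA, never by gradients

def distillation_loss(s_out, t_out, t_temp=0.04, s_temp=0.1):
    """Cross-entropy between sharpened teacher and student output distributions."""
    t = F.softmax(t_out / t_temp, dim=-1).detach()
    s = F.log_softmax(s_out / s_temp, dim=-1)
    return -(t * s).sum(dim=-1).mean()

# Two augmented "views" of the same sky image (random crops, jitter, etc.)
view1 = torch.randn(8, 3, 96, 96)
view2 = torch.randn(8, 3, 96, 96)
loss = distillation_loss(student(view1), teacher(view2))
loss.backward()

# EMA update of the teacher after the optimizer step
with torch.no_grad():
    m = 0.996
    for ps, pt in zip(student.parameters(), teacher.parameters()):
        pt.mul_(m).add_(ps, alpha=1 - m)
```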
Award ID(s):
1935984
PAR ID:
10472768
Publisher / Repository:
American Meteorological Society
Date Published:
Journal Name:
Artificial Intelligence for the Earth Systems
ISSN:
2769-7525
Page Range / eLocation ID:
1 to 34
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Cloud cover estimation from images taken by sky-facing cameras can be an important input for analyzing current weather conditions and estimating photovoltaic power generation. The constant change in position, shape, and density of clouds, however, makes the development of a robust computational method for cloud cover estimation challenging. Accurately determining the edge of clouds and hence the separation between clouds and clear sky is difficult and often impossible. Toward determining cloud cover for estimating photovoltaic output, we propose using machine learning methods for cloud segmentation. We compare several methods including a classical regression model, deep learning methods, and boosting methods that combine results from the other machine learning models. To train each of the machine learning models with various sky conditions, we supplemented the existing Singapore whole sky imaging segmentation database with hazy and overcast images collected by a camera-equipped Waggle sensor node. We found that the U-Net architecture, one of the deep neural networks we utilized, segmented cloud pixels most accurately. However, the accuracy of segmenting cloud pixels did not guarantee high accuracy of estimating solar irradiance. We confirmed that the cloud cover ratio is directly related to solar irradiance. Additionally, we confirmed that solar irradiance and solar power output are closely related; hence, by predicting solar irradiance, we can estimate solar power output. This study demonstrates that sky-facing cameras with machine learning methods can be used to estimate solar power output. This ground-based approach provides an inexpensive way to understand solar irradiance and estimate production from photovoltaic solar facilities. 
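As a concrete illustration of the quantity linking segmentation to irradiance in this abstract, here is a minimal sketch of computing the cloud cover ratio from a predicted cloud mask; the mask source (e.g., a U-Net) and the circular sky-region handling are assumptions, not the study's exact pipeline.

```python
# Illustrative sketch: deriving a cloud cover ratio from a segmentation mask,
# the quantity this abstract links to solar irradiance.
import numpy as np

def cloud_cover_ratio(cloud_mask: np.ndarray, sky_mask: np.ndarray) -> float:
    """Fraction of valid sky pixels classified as cloud.

    cloud_mask: boolean array, True where the model predicts cloud.
    sky_mask:   boolean array, True inside the camera's circular sky region
                (whole-sky images include non-sky border pixels).
    """
    valid = sky_mask.sum()
    if valid == 0:
        raise ValueError("sky_mask selects no pixels")
    return float((cloud_mask & sky_mask).sum()) / float(valid)

# Example with a synthetic 100x100 frame: a 40-row cloud band over a full sky region
cloud = np.zeros((100, 100), dtype=bool)
cloud[:40, :] = True
sky = np.ones((100, 100), dtype=bool)
print(cloud_cover_ratio(cloud, sky))  # 0.4
```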
  2. An increasing number of photovoltaic (PV) systems connected to the power grid are vulnerable to shadows cast by moving clouds. Global Solar Irradiance (GSI) forecasting allows smart grids to optimize energy dispatch, preventing energy shortages caused by occlusion of the sun. This investigation compares the performance of machine learning algorithms (which do not require labelled images for training) for real-time segmentation of clouds in images acquired using a ground-based infrared sky imager. Real-time segmentation is used to extract cloud features using only the pixels in which clouds are detected.
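For illustration, a sketch of extracting features from only the detected cloud pixels of an infrared frame follows; the threshold-based stand-in for the segmentation model and the specific feature set are assumptions, since the abstract does not name them.

```python
# Sketch of extracting per-frame cloud features from only the detected cloud
# pixels of an infrared image, as the abstract describes. The segmentation is
# stubbed with a simple threshold; the feature choices are illustrative.
import numpy as np

def cloud_features(ir_image: np.ndarray, threshold: float) -> dict:
    """Summary features computed over pixels classified as cloud.

    ir_image:  2D array of infrared intensities (e.g., brightness temperature).
    threshold: intensity above which a pixel is treated as cloud (a stand-in
               for a learned, label-free segmentation model).
    """
    mask = ir_image > threshold
    if not mask.any():
        return {"cover": 0.0, "mean_intensity": float("nan"), "centroid": None}
    ys, xs = np.nonzero(mask)
    return {
        "cover": float(mask.mean()),                        # fraction of frame that is cloud
        "mean_intensity": float(ir_image[mask].mean()),     # average IR intensity of cloud pixels
        "centroid": (float(ys.mean()), float(xs.mean())),   # useful for tracking cloud motion
    }

frame = np.random.default_rng(0).normal(280.0, 5.0, size=(64, 64))
print(cloud_features(frame, threshold=285.0))
```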
  3. The success of supervised learning requires large-scale ground-truth labels, which are expensive, time-consuming, or may need special skills to annotate. To address this issue, many self-supervised or unsupervised methods have been developed. Unlike most existing self-supervised methods, which learn only 2D image features or only 3D point cloud features, this paper presents a novel and effective self-supervised learning approach that jointly learns both 2D image features and 3D point cloud features by exploiting cross-modality and cross-view correspondences, without using any human-annotated labels. Specifically, 2D image features of rendered images from different views are extracted by a 2D convolutional neural network, and 3D point cloud features are extracted by a graph convolutional neural network. The two types of features are fed into a two-layer fully connected neural network to estimate the cross-modality correspondence. The three networks are jointly trained (i.e., cross-modality) by verifying whether two sampled data of different modalities belong to the same object; meanwhile, the 2D convolutional neural network is additionally optimized by minimizing intra-object distance while maximizing inter-object distance of rendered images in different views (i.e., cross-view). The effectiveness of the learned 2D and 3D features is evaluated by transferring them to five different tasks: multi-view 2D shape recognition, 3D shape recognition, multi-view 2D shape retrieval, 3D shape retrieval, and 3D part segmentation. Extensive evaluations on all five tasks across different datasets demonstrate the strong generalization and effectiveness of the 2D and 3D features learned by the proposed self-supervised method.
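As a rough sketch of the cross-modality correspondence step described above, the following trains a small two-layer head to score whether a 2D feature and a 3D feature belong to the same object; the feature extractors are stubbed with random tensors, and all names and dimensions are illustrative assumptions.

```python
# Minimal sketch of the cross-modality correspondence idea: a two-layer head
# scores whether a 2D image feature and a 3D point-cloud feature come from
# the same object. Feature extractors are stubbed for brevity.
import torch
import torch.nn as nn

class CorrespondenceHead(nn.Module):
    """Two-layer fully connected network over concatenated 2D/3D features."""
    def __init__(self, dim_2d=256, dim_3d=256, hidden=128):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(dim_2d + dim_3d, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, f2d, f3d):
        return self.fc(torch.cat([f2d, f3d], dim=-1)).squeeze(-1)

head = CorrespondenceHead()
f2d = torch.randn(16, 256)   # stand-in for 2D CNN features of rendered views
f3d = torch.randn(16, 256)   # stand-in for graph-network point-cloud features

# Positive pairs (same object) labeled 1; shuffled negatives labeled 0
labels = torch.cat([torch.ones(16), torch.zeros(16)])
logits = torch.cat([head(f2d, f3d), head(f2d, f3d[torch.randperm(16)])])
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
loss.backward()
```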
  4. The segmentation of sky images into regions of cloud and clear sky allows atmospheric scientists to determine the fraction of cloud cover and the distribution of cloud without resorting to subjective estimates by a human observer. This is a challenging problem because cloud boundaries and cirroform cloud regions are often semi-transparent and indistinct. In this study, we propose a lightweight, unsupervised methodology to identify cloud regions in ground-based hemispherical sky images. Our method offers a fast and adaptive approach without the necessity of fixed thresholds by utilizing K-means clustering on transformed pixel values. We present the results of our method for two data sets and compare them with three different methods in the literature. 
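For illustration, a minimal sketch of K-means clustering on transformed pixel values follows; the red-blue ratio transform is a common choice for visible-light sky images but is an assumption here, as the abstract does not specify the transformation used.

```python
# Sketch of label-free cloud segmentation via K-means over a transformed
# pixel value (red/blue ratio, a common but here assumed choice).
import numpy as np
from sklearn.cluster import KMeans

def segment_sky(rgb: np.ndarray) -> np.ndarray:
    """Cluster pixels into cloud/sky using K-means (k=2) on the R/B ratio."""
    r = rgb[..., 0].astype(float)
    b = rgb[..., 2].astype(float)
    ratio = (r / np.maximum(b, 1.0)).reshape(-1, 1)   # clear sky is strongly blue
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(ratio)
    labels2d = labels.reshape(rgb.shape[:2])
    ratio2d = ratio.reshape(rgb.shape[:2])
    # Convention: the cluster with the higher mean R/B ratio is "cloud" (whiter pixels)
    if ratio2d[labels2d == 0].mean() > ratio2d[labels2d == 1].mean():
        labels2d = 1 - labels2d
    return labels2d  # 1 = cloud, 0 = clear sky

img = np.random.default_rng(1).integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
mask = segment_sky(img)
print("cloud fraction:", mask.mean())
```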
  5. Abstract: Information about the spatial distribution of species lies at the heart of many important questions in ecology. Logistical limitations and collection biases, however, limit the availability of such data at ecologically relevant scales. Remotely sensed information can alleviate some of these concerns, but presents challenges associated with accurate species identification and limited availability of field data for validation, especially in high diversity ecosystems such as tropical forests.
    Recent advances in machine learning offer a promising and cost-efficient approach for gathering a large amount of species distribution data from aerial photographs. Here, we propose a novel machine learning framework, artificial perceptual learning (APL), to tackle the problem of weakly supervised pixel-level mapping of tree species in forests. Challenges arise from limited availability of ground labels for tree species, lack of precise segmentation of tree canopies and misalignment between visible canopies in the aerial images and stem locations associated with ground labels. The proposed APL framework addresses these challenges by constructing a workflow using state-of-the-art machine learning algorithms.
    We develop and illustrate the proposed framework by implementing a fine-grain mapping of three species, the palm Prestoea acuminata and the tree species Cecropia schreberiana and Manilkara bidentata, over a 5,000-ha area of El Yunque National Forest in Puerto Rico. These large-scale maps are based on unlabelled high-resolution aerial images of unsegmented tree canopies. Misaligned ground-based labels, available for <1% of these images, serve as the only weak supervision. APL performance is evaluated using ground-based labels and high-quality human segmentation using Amazon Mechanical Turk, and compared to a basic workflow that relies solely on labelled images.
    Receiver operating characteristic (ROC) curves and Intersection over Union (IoU) metrics demonstrate that APL substantially outperforms the basic workflow and attains human-level cognitive economy, with 50-fold time savings. For the palm and C. schreberiana, the APL framework has high pixelwise accuracy and IoU with reference to human segmentations. For M. bidentata, APL predictions are congruent with ground-based labels. Our approach shows great potential for leveraging existing data from global forest plot networks coupled with aerial imagery to map tree species at ecologically meaningful spatial scales.
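For reference, the Intersection over Union metric used in this evaluation can be computed from two binary masks as in the short sketch below (a standard definition, not code from the paper).

```python
# Intersection over Union (IoU) between a predicted mask and a reference mask.
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """IoU between two boolean masks: |pred AND truth| / |pred OR truth|."""
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return float(np.logical_and(pred, truth).sum()) / float(union)

a = np.zeros((10, 10), dtype=bool); a[:5, :] = True   # predicted species mask
b = np.zeros((10, 10), dtype=bool); b[2:7, :] = True  # reference human segmentation
print(iou(a, b))  # 30 overlapping pixels / 70 union pixels ≈ 0.43
```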