

Title: DeepRoof: A Data-driven Approach For Solar Potential Estimation Using Rooftop Imagery
Rooftop solar deployments are an excellent source of clean energy, and their popularity among homeowners has grown significantly over the years. Unfortunately, estimating the solar potential of a roof requires homeowners to consult solar consultants, who manually evaluate the site. Recently, there have been efforts to automatically estimate the solar potential for any roof within a city. However, current methods work only where LiDAR data is available, limiting their reach to just a few places in the world. In this paper, we propose DeepRoof, a data-driven approach that uses widely available satellite images to assess the solar potential of a roof. Using satellite images, DeepRoof determines the roof's geometry and leverages publicly available real-estate and solar irradiance data to provide a pixel-level estimate of the solar potential for each planar roof segment. Such estimates can be used to identify ideal locations on the roof for installing solar panels. Further, we evaluate our approach on an annotated roof dataset, validate the results with solar experts, and compare it to a LiDAR-based approach. Our results show that DeepRoof can accurately extract roof geometry, such as the planar roof segments and their orientation, achieving a true positive rate of 91.1% in identifying roofs and a low mean orientation error of 9.3 degrees. We also show that DeepRoof's median estimate of the available solar installation area is within 11% of a LiDAR-based approach.
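To make the estimation step concrete, here is a minimal sketch of how a per-segment potential could be computed once a pipeline like DeepRoof has recovered each segment's area and orientation. The cosine-based orientation factors, the irradiance and efficiency defaults, and the function name are illustrative assumptions, not the paper's actual irradiance model.

```python
import math

def segment_solar_potential(area_m2, tilt_deg, azimuth_deg,
                            annual_irradiance_kwh_m2=1700.0,
                            panel_efficiency=0.18):
    """Rough annual solar potential (kWh) for one planar roof segment.

    A sketch of the final estimation step: geometry (area, tilt, azimuth)
    combined with local irradiance data. The cosine factors are a crude
    clear-sky approximation, not the paper's irradiance model.
    """
    # Penalize deviation from an equator-facing panel (azimuth 180 degrees
    # in the northern hemisphere) and from a roughly optimal ~30 degree tilt.
    azimuth_factor = max(0.0, math.cos(math.radians(azimuth_deg - 180.0)))
    tilt_factor = max(0.0, math.cos(math.radians(tilt_deg - 30.0)))
    usable_irradiance = annual_irradiance_kwh_m2 * azimuth_factor * tilt_factor
    return area_m2 * usable_irradiance * panel_efficiency

# Example: a 25 m^2 south-facing segment tilted at 25 degrees.
print(f"{segment_solar_potential(25.0, 25.0, 180.0):.0f} kWh/year")
```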
Award ID(s):
1645952 1534080 1405826 1505422
PAR ID:
10163746
Journal Name:
25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining
Page Range / eLocation ID:
2105 to 2113
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
1. Geographic information systems (GIS) provide accurate maps of terrain, roads, waterways, and building footprints and heights. Aircraft, particularly small unmanned aircraft systems (UAS), can exploit this information, along with additional information such as building roof structure, to improve navigation accuracy and safely perform contingency landings, particularly in urban regions. However, building roof structure is not fully provided in maps. This paper proposes a method to automatically label building roof shape from publicly available GIS data. Satellite imagery and airborne LiDAR data are processed and manually labeled to create a diverse annotated roof image dataset for small to large urban cities. Multiple convolutional neural network (CNN) architectures are trained and tested, with the best-performing networks providing a condensed feature set for support vector machine and decision tree classifiers. Fusing satellite image and LiDAR data is shown to provide greater classification accuracy than using either data type alone. Adjusting model confidence thresholds leads to significant increases in model precision. Networks trained on roof data from Witten, Germany and Manhattan (New York City) are evaluated on independent data from these cities and Ann Arbor, Michigan.
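A minimal sketch of the fusion-and-thresholding idea described above: CNN features from the two modalities are concatenated, fed to an SVM, and low-confidence predictions are discarded to raise precision. The feature dimensions, the four-class label set, and the 0.80 threshold are placeholder assumptions, not the paper's actual configuration.

```python
import numpy as np
from sklearn.svm import SVC

# Stand-in features: in the paper these come from trained CNNs.
rng = np.random.default_rng(0)
sat_feats = rng.normal(size=(200, 64))     # satellite-image CNN features
lidar_feats = rng.normal(size=(200, 64))   # LiDAR CNN features
labels = rng.integers(0, 4, size=200)      # e.g., gable/hip/flat/complex

X = np.hstack([sat_feats, lidar_feats])    # early fusion by concatenation
clf = SVC(probability=True).fit(X, labels)

# Raise precision by abstaining on low-confidence predictions.
proba = clf.predict_proba(X)
confident = proba.max(axis=1) >= 0.80      # threshold tuned on validation data
predictions = proba.argmax(axis=1)[confident]
print(f"kept {confident.sum()} of {len(X)} predictions above threshold")
```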
2. Residential solar installations are becoming increasingly popular among homeowners. However, renters and homeowners living in shared buildings cannot go solar, as they do not own the shared spaces. Community-owned solar arrays and energy storage have emerged as a solution, enabling ownership even for those who do not own a property or roof. However, such community-owned systems do not allow individuals to control their share to optimize a home's electricity bill. To overcome this limitation, and inspired by the concept of virtualization in operating systems, we propose virtual community-owned solar and storage: a logical abstraction that allows individuals to independently control their share of the system. We argue that such individual control can benefit all owners and reduce their reliance on grid power. We present mechanisms and algorithms to provide a virtual solar and battery abstraction to users and to quantify their cost benefits. Our comparison with a traditional community-owned system shows that our AutoShare approach achieves the same global savings of 43% while providing independent control of the virtual system. Further, we show that independent energy sharing through virtualization provides an additional 8% increase in savings to individual owners.
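As a rough sketch of the virtualization abstraction, the class below gives one owner independent control of a logical slice of the shared array and battery. The interface, the greedy self-consumption policy, and the numbers are illustrative assumptions, not AutoShare's actual design.

```python
class VirtualShare:
    """One owner's logical slice of a shared solar array and battery."""

    def __init__(self, solar_fraction, battery_capacity_kwh):
        self.solar_fraction = solar_fraction      # share of the array's output
        self.capacity_kwh = battery_capacity_kwh  # virtual battery size
        self.stored_kwh = 0.0

    def step(self, array_output_kwh, home_demand_kwh):
        """One interval: use own solar first, then the virtual battery,
        then fall back to grid power. Returns the grid draw in kWh."""
        generation = self.solar_fraction * array_output_kwh
        surplus = generation - home_demand_kwh
        if surplus >= 0:
            # Charge the virtual battery with leftover generation.
            self.stored_kwh = min(self.capacity_kwh, self.stored_kwh + surplus)
            return 0.0
        deficit = -surplus
        discharge = min(self.stored_kwh, deficit)
        self.stored_kwh -= discharge
        return deficit - discharge  # remaining demand served by the grid

# Example: a 25% share of the array with a 4 kWh virtual battery.
share = VirtualShare(solar_fraction=0.25, battery_capacity_kwh=4.0)
grid_kwh = share.step(array_output_kwh=12.0, home_demand_kwh=2.0)
print(f"grid draw: {grid_kwh:.1f} kWh, stored: {share.stored_kwh:.1f} kWh")
```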
3. Context: Wildland-urban interface (WUI) areas face increased forest fire risks and extreme precipitation events due to climate change, which can lead to post-fire flood events. The city of Flagstaff in northern Arizona, USA, experienced WUI forest thinning, fire, and record rainfall events, which collectively contributed to large floods and damage to urban neighborhoods and city infrastructure. Objectives: We demonstrate multi-temporal, high-resolution image applications from an unoccupied aerial vehicle (UAV) and terrestrial lidar for estimating landscape disturbance impacts within the WUI. Changes in forest vegetation and bare ground cover in WUIs are particularly challenging to estimate with coarse-resolution satellite images due to fine-scale landscape processes and changes that often result in mixed pixels. Methods: Using Sentinel-2 satellite images, we document forest fire impacts and burn severity. Using 2016 and 2021 UAV multispectral images and Structure-from-Motion data, we estimate post-thinning changes in forest canopy cover, patch sizes, canopy height distribution, and bare ground cover. Using repeat lidar data within a smaller area of the watershed, we quantify geomorphic effects in the WUI associated with the fire and subsequent flooding. Results: We document that thinning significantly reduced forest canopy cover, patch size, tree density, and mean canopy height, substantially reducing future active crown fire risks. However, the thinning equipment ignited a forest fire, which burned the WUI at varying severity at the top of the watershed that drains into the city. Moderate- to high-severity burns occurred within 3 km of downtown Flagstaff, threatening the WUI neighborhoods and the city. The upstream burned area then experienced 100-year and 200–500-year rainfall events, which resulted in large runoff-driven floods and sedimentation in the city. Conclusions: We demonstrate that UAV high-resolution images and photogrammetry combined with terrestrial lidar data provide detailed and accurate estimates of forest thinning and post-fire flood impacts, which could not be estimated from coarser-resolution satellite images. Communities around the world may need to prepare their WUIs for catastrophic fires and increase their capacity to manage sediment-laden stormwater, since both fires and extreme weather events are projected to increase.
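A minimal sketch of the canopy-change computation, assuming canopy height models (CHMs) have already been derived from the 2016 and 2021 Structure-from-Motion data. The synthetic arrays and the 2 m canopy cutoff are placeholder assumptions, not values from the study; real CHMs would be loaded from georeferenced rasters.

```python
import numpy as np

# Synthetic stand-ins for pre- and post-thinning canopy height models.
rng = np.random.default_rng(1)
chm_2016 = rng.uniform(0, 20, size=(500, 500))  # heights in meters
chm_2021 = np.clip(chm_2016 - rng.uniform(0, 10, size=chm_2016.shape), 0, None)

CANOPY_MIN_HEIGHT = 2.0  # assumed cutoff separating canopy from ground/shrubs

def canopy_cover(chm, threshold=CANOPY_MIN_HEIGHT):
    """Fraction of pixels tall enough to count as forest canopy."""
    return float((chm >= threshold).mean())

print(f"2016 cover: {canopy_cover(chm_2016):.1%}")
print(f"2021 cover: {canopy_cover(chm_2021):.1%}")
print(f"mean height change: {(chm_2021 - chm_2016).mean():+.2f} m")
```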
4. Flat surfaces captured by 3D point clouds are often used for localization, mapping, and modeling. Dense point cloud processing has high computation and memory costs, making low-dimensional representations of flat surfaces, such as polygons, desirable. We present Polylidar3D, a non-convex polygon extraction algorithm that takes as input unorganized 3D point clouds (e.g., LiDAR data), organized point clouds (e.g., range images), or user-provided meshes. Non-convex polygons represent flat surfaces in an environment, with interior cutouts representing obstacles or holes. The Polylidar3D front-end transforms input data into a half-edge triangular mesh. This representation provides a common level of abstraction for subsequent back-end processing. The Polylidar3D back-end is composed of four core algorithms: mesh smoothing, dominant plane normal estimation, planar segment extraction, and finally polygon extraction. Polylidar3D is shown to be quite fast, making use of CPU multi-threading and GPU acceleration when available. We demonstrate Polylidar3D's versatility and speed with real-world datasets, including aerial LiDAR point clouds for rooftop mapping, autonomous driving LiDAR point clouds for road surface detection, and RGBD cameras for indoor floor/wall detection. We also evaluate Polylidar3D on a challenging planar segmentation benchmark dataset. Results consistently show excellent speed and accuracy.
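A minimal sketch of the planar segment extraction idea in the back-end: keep only elements whose normals fall within a small angle of a dominant plane normal. The real library operates on a half-edge mesh and then extracts polygons; the synthetic normals and the 10-degree threshold below are illustrative assumptions.

```python
import numpy as np

# Synthetic unit normals, with a cluster pointing mostly up (a "roof").
rng = np.random.default_rng(2)
normals = rng.normal(size=(1000, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
normals[:400] = [0.0, 0.0, 1.0] + 0.02 * rng.normal(size=(400, 3))
normals[:400] /= np.linalg.norm(normals[:400], axis=1, keepdims=True)

# Dominant plane normal, e.g., the peak of a normal histogram.
dominant = np.array([0.0, 0.0, 1.0])
MAX_ANGLE_DEG = 10.0
cos_threshold = np.cos(np.radians(MAX_ANGLE_DEG))

# An element belongs to the planar segment if its normal is close enough.
planar_mask = normals @ dominant >= cos_threshold
print(f"{planar_mask.sum()} of {len(normals)} elements lie on the plane")
```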
5. This paper addresses the problem of learning to complete a scene's depth from sparse depth points and images of indoor scenes. Specifically, we study the case in which the sparse depth is computed from a visual-inertial simultaneous localization and mapping (VI-SLAM) system. The resulting point cloud has low density, is noisy, and has a nonuniform spatial distribution compared to the input from active depth sensors, e.g., LiDAR or Kinect. Since VI-SLAM produces point clouds only over textured areas, we compensate for the missing depth of low-texture surfaces by leveraging their planar structure and their surface normals, an important intermediate representation. The pre-trained surface normal network, however, suffers from large performance degradation when the viewing direction (especially the roll angle) of the test image differs significantly from that of the training images. To address this limitation, we use the gravity estimate available from the VI-SLAM system to warp the input image to the orientation prevailing in the training dataset. This results in a significant performance gain for the surface normal estimates, and thus for the dense depth estimates. Finally, we show that our method outperforms other state-of-the-art approaches on both the training (ScanNet [1] and NYUv2 [2]) and testing (collected with Azure Kinect [3]) datasets.
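A minimal sketch of the gravity-based warping step, assuming the roll angle has already been recovered from the VI-SLAM gravity estimate. The OpenCV rotation below is an illustrative stand-in for the paper's warping, and the function name and sign convention are assumptions.

```python
import cv2
import numpy as np

def warp_to_gravity(image, roll_deg):
    """Rotate the image so its vertical axis aligns with gravity.

    roll_deg would come from the VI-SLAM gravity estimate; here it is a
    plain argument, and the sign convention is assumed.
    """
    h, w = image.shape[:2]
    center = (w / 2.0, h / 2.0)
    # Rotation that undoes the estimated camera roll.
    M = cv2.getRotationMatrix2D(center, -roll_deg, 1.0)
    return cv2.warpAffine(image, M, (w, h))

# Example with a synthetic frame and a 15-degree estimated roll.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
upright = warp_to_gravity(frame, roll_deg=15.0)
```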