Title: Terrestrial Lidar Point Cloud Data for: Evaporation and condensation dynamics within saturated epiphyte communities in a Quercus virginiana forest
Terrestrial lidar scans were captured using a BLK360 scanner (Leica Geosystems, Norcross, GA, USA), which has a range of 0.5–45 m and a measurement rate of up to 680,000 points s−1 at the high-resolution setting. A georeferenced 3-D point cloud of the study site was generated from 12 scans spaced approximately 50 m apart in both horizontal directions. Scan positions were chosen to maximize branch exposure to the scanner, and scanning was timed for calm weather to minimize occlusion of features from wind-induced noise or movement. Scans were co-registered in Leica Geosystems' Cyclone Register 360 software using its Visual Simultaneous Localization and Mapping (Visual SLAM) algorithm, yielding a relatively low overall co-registration error of 0.005–0.009 m. From this study-site point cloud, manual straight-line measurements from the ground to the sensors were made in Cyclone Register 360.
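As a small illustration of the straight-line ground-to-sensor measurements described above, the equivalent computation on coordinates extracted from a co-registered point cloud is a Euclidean distance. The coordinates below are invented for the example, not taken from the dataset:

```python
import numpy as np

# Hypothetical coordinates (metres) picked from a co-registered cloud:
# a ground point and a sensor mounting point.
ground = np.array([512.40, 1038.72, 3.15])
sensor = np.array([512.95, 1039.10, 9.87])

# Straight-line (Euclidean) distance, analogous to the manual
# measurements made in Cyclone Register 360.
distance = np.linalg.norm(sensor - ground)
print(f"{distance:.3f} m")
```

The same calculation applies to any pair of picked points once the scans share one coordinate frame, which is what the co-registration step provides.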
Award ID(s):
1954538
PAR ID:
10539635
Author(s) / Creator(s):
Publisher / Repository:
Zenodo
Date Published:
Subject(s) / Keyword(s):
Epiphytes; Rainfall; Condensation; Dew; Evaporation; Interception; Lidar
Format(s):
Medium: X
Right(s):
Creative Commons Attribution 4.0 International
Sponsoring Org:
National Science Foundation
More Like this
  1. Generating a comprehensive point cloud from the raw data collected at Mayfield involved three distinct steps: Pix4D was first used to process and analyze the data; Register 360 was then used to refine and align it; and Cyclone completed the point cloud generation, ensuring the result was as detailed and accurate as possible. Together, these steps produced a comprehensive point cloud usable for a range of applications, from surveying and mapping to construction and design.
    Damage reconnaissance of historical buildings affected by tornado loading
  2. In recent years, LiDAR sensors have become pervasive in the solutions to localization tasks for autonomous systems. One key step in using LiDAR data for localization is the alignment of two LiDAR scans taken from different poses, a process called scan-matching or point cloud registration. Most existing algorithms for this problem are heuristic in nature and local, meaning they may not produce accurate results under poor initialization. Moreover, existing methods give no guarantee on the quality of their output, which can be detrimental for safety-critical tasks. In this paper, we analyze a simple algorithm for point cloud registration, termed PASTA. This algorithm is global and does not rely on point-to-point correspondences, which are typically absent in LiDAR data. Moreover, and to the best of our knowledge, we offer the first point cloud registration algorithm with provable error bounds. Finally, we illustrate the proposed algorithm and error bounds in simulation on a simple trajectory tracking task. 
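PASTA's actual formulation and error bounds are in the paper; as a generic sketch of correspondence-free registration in the same spirit, one can align two clouds by matching their first- and second-moment statistics (centroids and principal axes), using the rigid-invariant third moment of the projections to fix axis signs. This is an illustration of the general idea, not the PASTA algorithm itself:

```python
import numpy as np

def principal_frame(X):
    """Centroid and principal axes of a cloud; axis signs are fixed by
    the skew of the projections, which a rigid motion preserves."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, V = np.linalg.eigh(np.cov(Xc.T))        # ascending eigenvalues
    for k in range(3):
        if np.mean((Xc @ V[:, k]) ** 3) < 0:   # flip to positive skew
            V[:, k] *= -1
    return mu, V

def align(src, dst):
    """Correspondence-free rigid alignment: dst ≈ R @ src + t."""
    mu_s, Vs = principal_frame(src)
    mu_d, Vd = principal_frame(dst)
    R = Vd @ Vs.T                              # rotate src frame onto dst frame
    t = mu_d - R @ mu_s
    return R, t

# Synthetic check: a skewed, anisotropic cloud and a rigidly moved copy.
rng = np.random.default_rng(0)
src = rng.exponential(scale=[3.0, 1.0, 0.3], size=(500, 3))
c, s = np.cos(0.4), np.sin(0.4)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])

R, t = align(src, dst)
err = np.abs(src @ R.T + t - dst).max()
```

Because no point-to-point matches are used, this kind of global, statistics-based approach does not need a good initial guess, which is the property the abstract highlights.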
  3. 3D scan registration is a classical, yet highly useful, problem in the context of 3D sensors such as Kinect and Velodyne. While there are several existing methods, the techniques are usually incremental: adjacent scans are registered first to obtain initial poses, followed by motion averaging and bundle-adjustment refinement. In this paper, we take a different approach and develop minimal solvers for jointly computing the initial poses of cameras in small loops such as 3-, 4-, and 5-cycles. Note that the classical registration of 2 scans can be done using a minimum of 3 point matches to compute 6 degrees of relative motion. On the other hand, to jointly compute the 3D registrations in n-cycles, we take 2 point matches between the first n−1 consecutive pairs (i.e., Scan 1 & Scan 2, . . . , and Scan n−1 & Scan n) and 1 or 2 point matches between Scan 1 and Scan n. Overall, we use 5, 7, and 10 point matches for 3-, 4-, and 5-cycles, and recover 12, 18, and 24 degrees of transformation variables, respectively. Using simulations and real data, we show that 3D registration using mini n-cycles is computationally efficient and can provide alternative and better initial poses compared to standard pairwise methods.
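The 3-point, 6-DOF pairwise case mentioned above can be sketched with the standard SVD-based (Kabsch) solver; the three matches below are generated from a made-up motion for illustration:

```python
import numpy as np

def rigid_from_matches(P, Q):
    """Solve Q ≈ R @ P + t from point matches; three non-collinear
    matches suffice for the 6-DOF pairwise case (Kabsch/SVD)."""
    muP, muQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - muP).T @ (Q - muQ)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # forbid reflection
    R = Vt.T @ D @ U.T
    t = muQ - R @ muP
    return R, t

# Three non-collinear matches from a known rotation and translation.
P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
Q = P @ R_true.T + t_true

R, t = rigid_from_matches(P, Q)
err = np.abs(P @ R.T + t - Q).max()
```

The n-cycle solvers in the abstract exploit the loop-closure constraint to get away with fewer matches per pair than this classical 3-match minimum.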
  4. Ishikawa, H.; Liu, CL.; Pajdla, T.; Shi, J. (Ed.)
    We propose a novel technique to register sparse 3D scans in the absence of texture. While existing methods such as KinectFusion or Iterative Closest Points (ICP) heavily rely on dense point clouds, this task is particularly challenging under sparse conditions without RGB data. Sparse texture-less data does not come with high-quality boundary signal, and this prohibits the use of correspondences from corners, junctions, or boundary lines. Moreover, in the case of sparse data, it is incorrect to assume that the same point will be captured in two consecutive scans. We take a different approach and first re-parameterize the point-cloud using a large number of line segments. In this re-parameterized data, there exists a large number of line intersection (and not correspondence) constraints that allow us to solve the registration task. We propose the use of a two-step alternating projection algorithm by formulating the registration as the simultaneous satisfaction of intersection and rigidity constraints. The proposed approach outperforms other top-scoring algorithms on both Kinect and LiDAR datasets. In Kinect, we can use 100X downsampled sparse data and still outperform competing methods operating on full-resolution data. 
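The two-step alternating-projection mechanism can be shown in miniature. This toy uses a plane and a line in R³ as stand-in constraint sets, not the paper's actual intersection and rigidity constraints; for intersecting affine sets, projecting onto each in turn converges to a point satisfying both:

```python
import numpy as np

def project_plane(x, n, d):
    """Project x onto the plane n·y = d (n unit-length)."""
    return x - (x @ n - d) * n

def project_line(x, p, v):
    """Project x onto the line p + s v (v unit-length)."""
    return p + ((x - p) @ v) * v

n = np.array([0.0, 0.0, 1.0])                         # plane z = 1
p = np.zeros(3)
v = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)          # line through origin

x = np.array([5.0, -2.0, 7.0])                        # arbitrary start
for _ in range(200):                                  # alternate projections
    x = project_line(project_plane(x, n, 1.0), p, v)
# x converges to (1, 1, 1), the intersection of the line and the plane.
```

The registration method in the abstract follows the same pattern, alternating between enforcing line-intersection constraints and enforcing rigidity of the transformation.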
  5. These data were gathered during the Geotechnical Extreme Events Reconnaissance (GEER) efforts following the February 6, 2023, Kahramanmaraş earthquake sequence. The dataset comprises terrestrial lidar scan point clouds that aim to capture liquefaction-induced building settlement, building-ground interactions, and ground deformations. The objective of the reconnaissance efforts was to capture perishable data on ground failures and liquefaction-induced infrastructure damage due to these earthquakes. Reconnaissance was performed from March 27 to April 1, 2023 in and around İskenderun, Hatay; Gölbaşı, Adıyaman; and Antakya, Hatay. Lidar scans were performed in İskenderun and Gölbaşı at selected liquefaction building sites. Reconnaissance sites were selected where there was evidence of liquefaction (e.g., ejecta), liquefaction-induced building settlement, and building-ground interaction, and where site access was available. The processed lidar data are included as .las point cloud files; raw data are included as .fls files. The point cloud data may be viewed and analyzed in point cloud analysis software, including the open-source software CloudCompare. Additional images of the surveyed buildings are included for reference. An explanation of the data types and structure is found in the README.pdf file. These data may be used to investigate earthquake liquefaction-induced building settlements, building-ground interactions, and liquefaction-induced ground deformations. These data will be of use and interest to engineers and researchers working in the area of liquefaction ground failures and building-ground interactions.
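One simple way point clouds like these can be used to quantify settlement is by differencing elevations of the same patch in two co-registered scans or against a reference datum. The sketch below is purely illustrative (the elevations and noise level are invented, and this is not the GEER processing workflow): the median makes the estimate robust to outlier returns.

```python
import numpy as np

# Hypothetical z-values (metres) of the same building-corner patch in
# two co-registered epochs; values and 4 mm noise are invented.
rng = np.random.default_rng(1)
pre  = 12.00 + rng.normal(0.0, 0.004, size=2000)   # patch A, epoch 1
post = 11.82 + rng.normal(0.0, 0.004, size=2000)   # patch A, epoch 2

# Robust settlement estimate: difference of median elevations.
settlement = np.median(pre) - np.median(post)
print(f"settlement ≈ {settlement:.3f} m")
```

In practice the patches would be cropped from the .las files (e.g., in CloudCompare) after verifying the scans share a common, well-registered frame.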