Title: Learned Visual Navigation for Under-Canopy Agricultural Robots
This paper describes a system for visually guided autonomous navigation of under-canopy farm robots. Low-cost under-canopy robots can drive between crop rows under the plant canopy and accomplish tasks that are infeasible for over-the-canopy drones or larger agricultural equipment. However, autonomously navigating them under the canopy presents a number of challenges: unreliable GPS and LiDAR, high cost of sensing, challenging farm terrain, clutter due to leaves and weeds, and large variability in appearance over the season and across crop types. We address these challenges by building a modular system that leverages machine learning for robust and generalizable perception from monocular RGB images from low-cost cameras, and model predictive control for accurate control in challenging terrain. Our system, CropFollow, is able to autonomously drive 485 meters per intervention on average, outperforming a state-of-the-art LiDAR-based system (286 meters per intervention) in extensive field testing spanning over 25 km.
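To make the perception-plus-MPC architecture concrete, the following is a minimal Python sketch of how such a pipeline could be wired together. The perception stub, unicycle motion model, cost weights, and function names are illustrative assumptions, not the CropFollow implementation.

# Hypothetical sketch of a monocular-perception-to-MPC pipeline in the spirit of
# CropFollow. The perception model is a stand-in and all gains are assumptions.
import numpy as np

def predict_row_pose(rgb_image):
    """Stand-in for a learned perception model: returns (heading error [rad],
    lateral offset [m]) of the robot relative to the crop-row centerline.
    A real system would run a trained CNN on the camera frame here."""
    return 0.05, -0.10  # placeholder values

def mpc_steering(heading_err, lateral_off, v=0.5, horizon=10, dt=0.1):
    """Pick the constant angular velocity that minimizes a quadratic cost over a
    short horizon, using a unicycle motion model (coarse receding-horizon sketch)."""
    best_w, best_cost = 0.0, float("inf")
    for w in np.linspace(-0.5, 0.5, 41):          # candidate turn rates [rad/s]
        theta, y, cost = heading_err, lateral_off, 0.0
        for _ in range(horizon):
            theta += w * dt
            y += v * np.sin(theta) * dt
            cost += 5.0 * y**2 + 1.0 * theta**2 + 0.1 * w**2
        if cost < best_cost:
            best_w, best_cost = w, cost
    return best_w

frame = np.zeros((240, 320, 3), dtype=np.uint8)   # dummy camera frame
heading_err, lateral_off = predict_row_pose(frame)
print("commanded angular velocity [rad/s]:", round(mpc_steering(heading_err, lateral_off), 3))

The modular split in this sketch mirrors the abstract's description: a learned front end supplies row-relative pose from a monocular image, and a model predictive controller turns that pose into steering commands.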
Ozkan-Aydin, Yasemin; Goldman, Daniel I.
(, Science Robotics)
Swarms of ground-based robots are presently limited to relatively simple environments, which we attribute in part to the lack of locomotor capabilities needed to traverse complex terrain. To advance the field of terradynamically capable swarming robotics, inspired by the capabilities of multilegged organisms, we hypothesize that legged robots consisting of reversibly chainable modular units with appropriate passive perturbation management mechanisms can perform diverse tasks in variable terrain without complex control and sensing. Here, we report a reconfigurable swarm of identical low-cost quadruped robots (with directionally flexible legs and tail) that can be linked on demand and autonomously. When tasks become terradynamically challenging for individuals to perform alone, the individuals suffer performance degradation. A systematic study of the performance of linked units leads to new discoveries of the emergent obstacle navigation capabilities of multilegged robots. We also demonstrate swarm capabilities through multirobot object transport. In summary, we argue that improved capabilities of terrestrial robot swarms can be achieved via the judicious interaction of relatively simple units.
Velikova, Mariya; Fernandez-Diaz, Juan; Glennie, Craig
(, ISPRS Open Journal of Photogrammetry and Remote Sensing)
The ATLAS sensor onboard the ICESat-2 satellite is a photon-counting lidar (PCL) with a primary mission to map Earth's ice sheets. A secondary goal of the mission is to provide vegetation and terrain elevations, which are essential for calculating the planet's biomass carbon reserves. A drawback of ATLAS is that the sensor does not provide reliable terrain height estimates in dense, high-closure forests because only a few photons reach the ground through the canopy and return to the detector. This low penetration translates into lower accuracy for the resultant terrain model. Tropical forest measurements with ATLAS have an additional problem estimating top of canopy because of frequent atmospheric phenomena such as fog and low clouds that can be misinterpreted as top of the canopy. To alleviate these issues, we propose using a ConvPoint neural network for 3D point clouds and high-density airborne lidar as training data to classify vegetation and terrain returns from ATLAS. The semantic segmentation network provides excellent results and could be used in parallel with the current ATL08 noise filtering algorithms, especially in areas with dense vegetation. We use high-density airborne lidar data acquired along ICESat-2 transects in Central American forests as a ground reference for training the neural network to distinguish between noise photons and photons lying between the terrain and the top of the canopy. Each photon event receives a label (noise or signal) in the test phase, providing automated noise-filtering of the ATL03 data. The terrain and top of canopy elevations are subsequently aggregated in 100 m segments using a series of iterative smoothing filters. We demonstrate improved estimates for both terrain and top of canopy elevations compared to the ATL08 100 m segment estimates. The neural network (NN) noise filtering reliably eliminated outlier top of canopy estimates caused by low clouds, and aggregated root mean square error (RMSE) decreased from 7.7 m for ATL08 to 3.7 m for NN prediction (18 test profiles aggregated). For terrain elevations, RMSE decreased from 5.2 m for ATL08 to 3.3 m for the NN prediction, compared to airborne lidar reference profiles. Keywords: ICESat-2; lidar; point cloud; noise filtering.
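As a rough illustration of the segment-level aggregation described above, the sketch below bins already-classified signal photons into 100 m along-track segments, takes low and high percentiles as terrain and top-of-canopy estimates, and computes an RMSE against a reference. The percentile choices, function names, and toy data are assumptions, not the ATL08 algorithm or the authors' pipeline.

# Illustrative sketch (not the study's code): aggregate photon events that a
# classifier has already labeled as signal into fixed 100 m along-track segments,
# then estimate terrain and top-of-canopy heights per segment.
import numpy as np

def aggregate_segments(along_track_m, elevation_m, is_signal, seg_len=100.0):
    """Return per-segment (terrain, canopy_top) elevation estimates.

    terrain  ~ low percentile of signal photon elevations in the segment
    canopy   ~ high percentile; both percentiles are illustrative choices."""
    along_track_m = np.asarray(along_track_m, float)
    elevation_m = np.asarray(elevation_m, float)
    keep = np.asarray(is_signal, dtype=bool)
    seg_id = np.floor(along_track_m[keep] / seg_len).astype(int)
    results = {}
    for s in np.unique(seg_id):
        z = elevation_m[keep][seg_id == s]
        results[int(s)] = (np.percentile(z, 5), np.percentile(z, 98))
    return results

def rmse(pred, ref):
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

# Toy data: 300 m of track with synthetic ground (~10 m) and canopy (~35 m) photons.
rng = np.random.default_rng(0)
x = rng.uniform(0, 300, 2000)
z = np.where(rng.random(2000) < 0.2,
             10 + rng.normal(0, 0.5, 2000),
             35 + rng.normal(0, 3.0, 2000))
segments = aggregate_segments(x, z, is_signal=np.ones(2000, bool))
print(segments)
print("terrain RMSE vs 10 m reference:",
      rmse([t for t, _ in segments.values()], [10.0] * len(segments)))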
Urbazaev, Mikhail; Hess, Laura L; Hancock, Steven; Sato, Luciane Yumie; Ometto, Jean Pierre; Thiel, Christian; Dubois, Clémence; Heckel, Kai; Urban, Marcel; Adam, Markus; et al
(, Science of Remote Sensing)
Accurate measurements of terrain elevation are crucial for many ecological applications. In this study, we sought to assess new global three-dimensional Earth observation data acquired by the spaceborne Light Detection and Ranging (LiDAR) missions Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) and Global Ecosystem Dynamics Investigation (GEDI). For this, we examined the “ATLAS/ICESat-2 L3A Land and Vegetation Height”, version 5 (20 × 14 m and 100 × 14 m segments) and the “GEDI Level 2A Footprint Elevation and Height Metrics”, version 2 (25 m circle). We conducted our analysis across four land cover classes (bare soil, herbaceous, forest, savanna) and six forest types (temperate broad-leaved, temperate needle-leaved, temperate mixed, tropical upland, tropical floodplain, and tropical secondary forest). For the assessment of terrain elevation estimates from spaceborne LiDAR data, we used high-resolution airborne data. Our results indicate that both LiDAR missions provide accurate terrain elevation estimates across different land cover classes and forest types with mean error less than 1 m, except in tropical forests. However, using a GEDI algorithm with a lower signal end threshold (e.g., algorithm 5) can improve the accuracy of terrain elevation estimates for tropical upland forests. Specific environmental parameters (terrain slope, canopy height and canopy cover) and sensor parameters (GEDI degrade flags, terrain estimation algorithm; ICESat-2 number of terrain photons, terrain uncertainty) can be applied to improve the accuracy of ICESat-2 and GEDI-based terrain estimates. Although the goodness-of-fit statistics from the two spaceborne LiDARs are not directly comparable since they possess different footprint sizes (100 × 14 m segment or 20 × 14 m segment vs. 25 m circle), we observed similar trends in the impact of terrain slope, canopy cover and canopy height for both sensors. Terrain slope strongly impacts the accuracy of both ICESat-2 and GEDI terrain elevation estimates for both forested and non-forested areas. In the case of GEDI, however, the impact of slope is partly caused by horizontal geolocation error. Moreover, dense canopies (i.e., canopy cover higher than 90%) affect the accuracy of spaceborne LiDAR terrain estimates, while canopy height does not, when considering samples over flat terrains. Our analysis of the accuracy and precision of current versions of spaceborne LiDAR products for different vegetation types and environmental conditions provides insights into parameter selection and estimated uncertainty to inform users of these key global datasets.
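A simple way to reproduce this kind of stratified assessment, under assumed inputs, is to group footprint-level elevation errors by land cover class and report the mean error (bias) and RMSE per class; the sketch below is illustrative and not the study's code.

# Illustrative sketch (assumed data layout): stratify spaceborne-LiDAR terrain
# errors by land cover class and summarize bias and RMSE against an
# airborne-lidar reference.
import numpy as np
from collections import defaultdict

def error_stats_by_class(est_elev, ref_elev, land_cover):
    """Group (estimate - reference) errors by class label and summarize."""
    errors = defaultdict(list)
    for est, ref, cls in zip(est_elev, ref_elev, land_cover):
        errors[cls].append(est - ref)
    stats = {}
    for cls, errs in errors.items():
        e = np.asarray(errs, float)
        stats[cls] = {"mean_error_m": float(e.mean()),
                      "rmse_m": float(np.sqrt(np.mean(e ** 2))),
                      "n": len(e)}
    return stats

# Toy example with made-up footprint elevations.
est = [101.2, 99.8, 250.5, 248.0, 52.1]
ref = [100.9, 100.0, 249.0, 247.5, 52.0]
cls = ["forest", "forest", "tropical_upland", "tropical_upland", "bare_soil"]
for name, s in error_stats_by_class(est, ref, cls).items():
    print(name, s)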
Jones, Benjamin; Ward Jones, Melissa
(, NSF Arctic Data Center)
The University of Alaska Fairbanks T Field is a legacy farm field that is part of the National Science Foundation (NSF)-funded Permafrost Grown project. We are studying the long-term effects of permafrost thaw following initial clearing for cultivation purposes. In this regard, we have acquired very high-resolution light detection and ranging (LiDAR) data and digital photography from a DJI M300 drone using a Zenmuse L1 and a MicaSense RedEdge-P camera. The Zenmuse L1 integrates a Livox LiDAR module, a high-accuracy inertial measurement unit (IMU), and a camera with a 1-inch CMOS sensor on a 3-axis stabilized gimbal. The MicaSense RedEdge-P camera has five multispectral bands and a high-resolution panchromatic band. The drone was configured to fly in real-time kinematic (RTK) mode at an altitude of 60 meters above ground level using the DJI D-RTK 2 base station. Data were acquired using a 50% sidelap and a 70% frontlap for the Zenmuse L1 and an 80% sidelap and a 75% frontlap for the MicaSense. Additional ground control was established with a Leica GS18 global navigation satellite system (GNSS), and all data have been post-processed to World Geodetic System 1984 (WGS84) Universal Transverse Mercator (UTM) Zone 6 North using ellipsoid heights. Data outputs include a LiDAR point cloud classified into two classes, a digital surface model, a digital terrain model, an orthophoto mosaic, and a multispectral orthoimage consisting of five bands. Image acquisition occurred on 18 August 2023.
Sartoretti, Guillaume; Shaw, Samuel
(, IEEE International Conference on Robotics and Automation)
Inspired by the locomotor nervous system of vertebrates, central pattern generator (CPG) models can be used to design gaits for articulated robots, such as crawling, swimming, or legged robots. Incorporating sensory feedback for gait adaptation in these models can improve the locomotive performance of such robots in challenging terrain. However, many CPG models to date have been developed exclusively for open-loop gait generation for traversing level terrain. In this paper, we present a novel approach for incorporating inertial feedback into the CPG framework for the control of body posture during legged locomotion on steep, unstructured terrain. That is, we adapt the limit cycle of each leg of the robot with time to simultaneously produce locomotion and body posture control. We experimentally validate our approach on a hexapod robot locomoting in a variety of steep, challenging terrains (grass, rocky slide, stairs). We show how our approach can be used to level the robot's body, allowing it to locomote at a relatively constant speed, even as terrain steepness and complexity prevent the use of an open-loop control strategy.
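The sketch below illustrates the general idea of a CPG oscillator whose output is combined with an inertial-feedback term to adjust per-leg setpoints for body leveling. The Hopf oscillator form, gains, and feedback law are assumptions for illustration, not the paper's controller.

# Minimal sketch of a CPG (Hopf oscillator) with an inertial-feedback offset,
# illustrating how each leg's limit cycle output can be shifted to level the body.
import numpy as np

def hopf_step(x, y, dt=0.01, mu=1.0, omega=2*np.pi, gamma=10.0):
    """One Euler step of a Hopf oscillator converging to a circular limit cycle."""
    r2 = x * x + y * y
    dx = gamma * (mu - r2) * x - omega * y
    dy = gamma * (mu - r2) * y + omega * x
    return x + dx * dt, y + dy * dt

def leg_setpoint(y_osc, roll, leg_side, k_roll=0.3):
    """Vertical leg setpoint = oscillator output + posture correction.

    If the body rolls toward one side, legs on that side are extended and the
    opposite legs retracted (a simple proportional leveling term)."""
    side = +1.0 if leg_side == "left" else -1.0
    return y_osc + side * k_roll * roll

# Simulate one oscillator and apply a constant 0.1 rad body-roll disturbance.
x, y = 1.0, 0.0
for _ in range(200):
    x, y = hopf_step(x, y)
print("left leg setpoint :", round(leg_setpoint(y, roll=0.1, leg_side="left"), 3))
print("right leg setpoint:", round(leg_setpoint(y, roll=0.1, leg_side="right"), 3))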
Sivakumar, Arun; Modi, Sahil; Gasparino, Mateus; Ellis, Che; Velasquez, Andres; Chowdhary, Girish; Gupta, Saurabh.
"Learned Visual Navigation for Under-Canopy Agricultural Robots". Robotics: Science and Systems (RSS). https://par.nsf.gov/biblio/10416327.
@article{osti_10416327,
place = {Country unknown/Code not available},
title = {Learned Visual Navigation for Under-Canopy Agricultural Robots},
url = {https://par.nsf.gov/biblio/10416327},
abstractNote = {This paper describes a system for visually guided autonomous navigation of under-canopy farm robots. Low-cost under-canopy robots can drive between crop rows under the plant canopy and accomplish tasks that are infeasible for over-the-canopy drones or larger agricultural equipment. However, autonomously navigating them under the canopy presents a number of challenges: unreliable GPS and LiDAR, high cost of sensing, challenging farm terrain, clutter due to leaves and weeds, and large variability in appearance over the season and across crop types. We address these challenges by building a modular system that leverages machine learning for robust and generalizable perception from monocular RGB images from low-cost cameras, and model predictive control for accurate control in challenging terrain. Our system, CropFollow, is able to autonomously drive 485 meters per intervention on average, outperforming a state-of-the-art LiDAR based system (286 meters per intervention) in extensive field testing spanning over 25 km.},
journal = {Robotics: Science and Systems (RSS)},
author = {Sivakumar, Arun and Modi, Sahil and Gasparino, Mateus and Ellis, Che and Velasquez, Andres and Chowdhary, Girish and Gupta, Saurabh},
}