

Title: A Spatial AI-Based Agricultural Robotic Platform for Wheat Detection and Collision Avoidance

To obtain more consistent measurements through the course of a wheat growing season, we conceived and designed an autonomous robotic platform that performs collision avoidance while navigating in crop rows using spatial artificial intelligence (AI). The main constraint for agronomists is that the robot must not run over the wheat while driving. Accordingly, we trained a spatial deep learning model that navigates the robot autonomously in the field while avoiding collisions with the wheat. To train this model, we used publicly available databases of prelabeled wheat images, along with images of wheat that we collected in the field. We used the MobileNet single shot detector (SSD) as our deep learning model to detect wheat in the field. To increase the frame rate for real-time robot response to field environments, we trained MobileNet SSD on the wheat images and used a new stereo camera, the Luxonis Depth AI camera. Together, the newly trained model and camera achieve a frame rate of 18–23 frames per second (fps), fast enough for the robot to process its surroundings once every 2–3 inches of driving. Having verified that the robot accurately detects its surroundings, we addressed its autonomous navigation. The new stereo camera allows the robot to determine its distance from the trained objects. In this work, we also developed a navigation and collision avoidance algorithm that uses this distance information to help the robot perceive its surroundings and maneuver in the field, precisely avoiding collisions with the wheat crop. Extensive experiments were conducted to evaluate the performance of our proposed method. We also compared the quantitative results of our proposed MobileNet SSD model with those of other state-of-the-art object detection models, such as YOLOv5 and the Faster region-based convolutional neural network (R-CNN).
The detailed comparative analysis reveals the effectiveness of our method in terms of both model precision and inference speed.
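The navigation logic described above can be sketched as a simple decision rule: detect wheat, read each detection's stereo distance, and steer away from the nearest detection. The detection tuple format, confidence cutoff, and distance thresholds below are illustrative assumptions for a sketch, not the paper's actual implementation.

```python
# Minimal sketch of a detection-plus-depth steering decision.
# Thresholds and the detection format are assumptions, not the paper's API.

STOP_DISTANCE_M = 0.3   # halt if wheat is closer than this (assumed)
AVOID_DISTANCE_M = 0.8  # begin steering away inside this range (assumed)

def steer(detections):
    """Return a drive command given detections as (label, conf, dist_m, bearing_rad).

    bearing < 0 means the object is to the robot's left, > 0 to its right.
    """
    wheat = [d for d in detections if d[0] == "wheat" and d[1] >= 0.5]
    if not wheat:
        return "forward"
    label, conf, dist, bearing = min(wheat, key=lambda d: d[2])
    if dist < STOP_DISTANCE_M:
        return "stop"
    if dist < AVOID_DISTANCE_M:
        # Steer away from the side the nearest wheat detection is on.
        return "turn_right" if bearing < 0 else "turn_left"
    return "forward"
```

At 18–23 fps, a rule this cheap runs comfortably within one frame interval, which is what makes the per-2–3-inch response described above feasible.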

Award ID(s):
1826820
NSF-PAR ID:
10501200
Author(s) / Creator(s):
Publisher / Repository:
MDPI
Date Published:
Journal Name:
AI
Volume:
3
Issue:
3
ISSN:
2673-2688
Page Range / eLocation ID:
719 to 738
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    This work presents the design and autonomous navigation policy of the Resilient Micro Flyer, a new type of collision-tolerant robot tailored to fly through extremely confined environments and manhole-sized tubes. The robot maintains a low weight (<500g) and implements a combined rigid-compliant design through the integration of elastic flaps around its stiff collision-tolerant frame. These passive flaps ensure compliant collisions, contact sensing and smooth navigation in contact with the environment. Focusing on resilient autonomy, capable of running on resource-constrained hardware, we demonstrate the beneficial role of compliant collisions for the reliability of the onboard visual-inertial odometry and propose a safe navigation policy that exploits both collision-avoidance using lightweight time-of-flight sensing and adaptive control in response to collisions. The robot further realizes an explicit manhole navigation mode that exploits the direct mechanical feedback provided by the flaps and a special navigation strategy to self-align inside manholes with non-straight geometry. Comprehensive experimental studies are presented to evaluate, both individually and as a whole, how resilience is achieved based on the robot design and its navigation scheme. 
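The avoidance half of such a policy can be illustrated with a minimal speed governor driven by the closest time-of-flight return; the sensor layout, thresholds, and gains below are assumptions for illustration, not the robot's actual controller.

```python
# Sketch: scale forward speed by the nearest time-of-flight range reading.
# All constants are illustrative assumptions.

MIN_RANGE_M = 0.25    # below this, command a stop (assumed)
SLOW_RANGE_M = 1.0    # begin slowing inside this range (assumed)
MAX_SPEED = 1.0       # m/s, nominal forward speed (assumed)

def forward_speed(tof_ranges_m):
    """Map a list of time-of-flight ranges (meters) to a forward speed."""
    nearest = min(tof_ranges_m)
    if nearest < MIN_RANGE_M:
        return 0.0
    if nearest < SLOW_RANGE_M:
        # Linear ramp from 0 at MIN_RANGE_M up to MAX_SPEED at SLOW_RANGE_M.
        return MAX_SPEED * (nearest - MIN_RANGE_M) / (SLOW_RANGE_M - MIN_RANGE_M)
    return MAX_SPEED
```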
  2. In this paper, we propose a controller that stabilizes a holonomic robot with single-integrator dynamics to a target position in a bounded domain, while preventing collisions with convex obstacles. We assume that the robot can measure its own position and heading in a global coordinate frame, as well as its relative position vector to the closest point on each obstacle in its sensing range. The robot has no information about the locations and shapes of the obstacles. We define regions around the boundaries of the obstacles and the domain within which the robot can sense these boundaries, and we associate each region with a virtual potential field that we call a local navigation-like function (NLF), which is only a function of the robot’s position and its distance from the corresponding boundary. We also define an NLF for the remaining free space of the domain, and we identify the critical points of the NLFs. Then, we propose a switching control law that drives the robot along the negative gradient of the NLF for the obstacle that is currently closest, or the NLF for the remaining free space if no obstacle is detected. We derive a conservative upper bound on the tunable parameter of the NLFs that guarantees the absence of locally stable equilibrium points, which can trap the robot, if the obstacles’ boundaries satisfy a minimum curvature condition. We also analyze the convergence and collision avoidance properties of the switching control law and, using a Lyapunov argument, prove that the robot safely navigates around the obstacles and converges asymptotically to the target position. We validate our analytical results for domains with different obstacle configurations by implementing the controller in both numerical simulations and physical experiments with a nonholonomic mobile robot. 
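A toy version of such a switching potential-based controller can illustrate the idea: descend an attractive well in free space, and switch to a boundary-aware potential inside the sensing band around an obstacle. The potential, gains, and geometry below are illustrative stand-ins, not the paper's navigation-like functions or its stability guarantees.

```python
import math

# Illustrative switching potential for a single-integrator robot and one
# circular obstacle. All constants are assumptions for this sketch.

TARGET = (4.0, 0.0)
OBST_C, OBST_R = (2.0, 0.05), 0.5   # circular obstacle: center, radius (m)
BAND = 0.6                           # sensing band around the boundary (assumed)
K_REP = 0.2                          # tunable repulsion gain (assumed)

def potential(x, y):
    """Attractive quadratic well, plus repulsion active only inside the band."""
    attract = 0.5 * ((x - TARGET[0]) ** 2 + (y - TARGET[1]) ** 2)
    d = math.hypot(x - OBST_C[0], y - OBST_C[1]) - OBST_R
    if d < BAND:  # switch to the obstacle-boundary potential
        return attract + K_REP * (1.0 / d - 1.0 / BAND) ** 2
    return attract

def step(x, y, dt=0.01, eps=1e-4, max_step=0.05):
    """One numerical gradient-descent step, clipped for stability."""
    gx = (potential(x + eps, y) - potential(x - eps, y)) / (2 * eps)
    gy = (potential(x, y + eps) - potential(x, y - eps)) / (2 * eps)
    sx, sy = -dt * gx, -dt * gy
    n = math.hypot(sx, sy)
    if n > max_step:
        sx, sy = sx * max_step / n, sy * max_step / n
    return x + sx, y + sy

x, y = 0.0, 0.3                      # start off the obstacle's symmetry axis
min_clear = float("inf")             # track the closest approach to the boundary
for _ in range(5000):
    x, y = step(x, y)
    min_clear = min(min_clear, math.hypot(x - OBST_C[0], y - OBST_C[1]) - OBST_R)
```

The paper's NLF construction is what removes the spurious local minima that a naive potential like this one can exhibit when the start, obstacle, and target are collinear.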
  3. For many types of robots, avoiding obstacles is necessary to prevent damage to the robot and environment. As a result, obstacle avoidance has historically been an important problem in robot path planning and control. Soft robots represent a paradigm shift with respect to obstacle avoidance because their low mass and compliant bodies can make collisions with obstacles inherently safe. Here we consider the benefits of intentional obstacle collisions for soft robot navigation. We develop and experimentally verify a model of robot-obstacle interaction for a tip-extending soft robot. Building on the obstacle interaction model, we develop an algorithm to determine the path of a growing robot that takes into account obstacle collisions. We find that obstacle collisions can be beneficial for open-loop navigation of growing robots because the obstacles passively steer the robot, both reducing the uncertainty of the location of the robot and directing the robot to targets that do not lie on a straight path from the starting point. Our work shows that for a robot with predictable and safe interactions with obstacles, target locations in a cluttered, mapped environment can be reached reliably by simply setting the initial trajectory. This has implications for the control and design of robots with minimal active steering. 
  4. Skateboarding as a method of transportation has become prevalent, which has increased the occurrence and likelihood of pedestrian–skateboarder collisions and near-collision scenarios in shared-use roadway areas. Collisions between pedestrians and skateboarders can result in significant injury. New approaches are needed to evaluate shared-use areas prone to hazardous pedestrian–skateboarder interactions, and to perform real-time, in situ (e.g., on-device) predictions of pedestrian–skateboarder collisions as road conditions vary due to changes in land usage and construction. A mechanism called Surrogate Safety Measures for skateboarder–pedestrian interaction can be computed to evaluate high-risk conditions on roads and sidewalks using deep learning object detection models. In this paper, we present the first skateboarder–pedestrian safety study leveraging deep learning architectures. We review and analyze state-of-the-art deep learning architectures, namely the Faster R-CNN and two variants of the Single Shot Multi-box Detector (SSD) model, to select the model that best suits each of two different tasks: automated calculation of Post Encroachment Time (PET) and finding hazardous conflict zones in real time. We also contribute a new annotated data set of skateboarder–pedestrian interactions collected for this study. Both of our selected models can detect and classify pedestrians and skateboarders correctly and efficiently. However, due to differences in their architectures and the advantages and disadvantages of each model, the two models were used for two different sets of tasks: owing to its superior accuracy, the Faster R-CNN model was used to automate the calculation of post encroachment time, whereas the Single Shot Multibox MobileNet V1 model, with its extremely fast inference rate, was used to determine hazardous regions in real time. 
An outcome of this work is a model that can be deployed on low-cost, small-footprint mobile and IoT devices at traffic intersections with existing cameras to perform on-device inferencing for in situ Surrogate Safety Measurement (SSM), such as Time-To-Collision (TTC) and Post Encroachment Time (PET). SSM values that exceed a hazard threshold can be published to a Message Queuing Telemetry Transport (MQTT) broker, where messages are received by an intersection traffic signal controller for real-time signal adjustment, thus contributing to state-of-the-art vehicle and pedestrian safety at hazard-prone intersections. 
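PET itself is simple to compute once zone occupancy intervals are extracted from the per-frame detections: it is the time gap between one road user leaving the conflict zone and the other entering it. The interval format below is an assumption for illustration, not the study's pipeline.

```python
# Sketch of Post Encroachment Time (PET) from conflict-zone occupancy
# intervals. Smaller PET values indicate a riskier interaction.

def pet(exit_time_first, entry_time_second):
    """PET in seconds between the first user's exit and the second's entry."""
    return entry_time_second - exit_time_first

def pet_from_tracks(zone_intervals_a, zone_intervals_b):
    """Minimum PET over all pairs of non-overlapping zone visits.

    Each argument is a list of (t_enter, t_exit) occupancy intervals, in
    seconds, for one road user in the conflict zone. Returns None if every
    pair of visits overlaps (simultaneous occupancy: PET is undefined).
    """
    best = None
    for a_in, a_out in zone_intervals_a:
        for b_in, b_out in zone_intervals_b:
            if b_in >= a_out:          # B entered after A left
                gap = b_in - a_out
            elif a_in >= b_out:        # A entered after B left
                gap = a_in - b_out
            else:
                continue               # overlapping occupancy: skip this pair
            best = gap if best is None else min(best, gap)
    return best
```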
  5. A major challenge in deploying the smallest of Micro Aerial Vehicle (MAV) platforms (< 100 g) is their inability to carry sensors that provide high-resolution metric depth information (e.g., LiDAR or stereo cameras). Current systems rely on end-to-end learning or heuristic approaches that directly map images to control inputs, and struggle to fly fast in unknown environments. In this work, we ask the following question: using only a monocular camera, optical odometry, and offboard computation, can we create metrically accurate maps to leverage the powerful path planning and navigation approaches employed by larger state-of-the-art robotic systems to achieve robust autonomy in unknown environments? We present MonoNav: a fast 3D reconstruction and navigation stack for MAVs that leverages recent advances in depth prediction neural networks to enable metrically accurate 3D scene reconstruction from a stream of monocular images and poses. MonoNav uses off-the-shelf pre-trained monocular depth estimation and fusion techniques to construct a map, then searches over motion primitives to plan a collision-free trajectory to the goal. In extensive hardware experiments, we demonstrate how MonoNav enables the Crazyflie (a 37 g MAV) to navigate fast (0.5 m/s) in cluttered indoor environments. We evaluate MonoNav against a state-of-the-art end-to-end approach, and find that the collision rate in navigation is significantly reduced (by a factor of 4). This increased safety comes at the cost of conservatism in terms of a 22% reduction in goal completion. 
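The planning step, searching over motion primitives for a collision-free arc toward the goal, can be sketched as follows. The primitive set, map format, and scoring below are illustrative assumptions, not MonoNav's actual code.

```python
import math

# Sketch: pick the collision-free forward arc whose endpoint is closest to
# the goal, given obstacle points in the robot frame. All constants assumed.

PRIMITIVES = {                       # arc name -> curvature (1/m)
    "straight": 0.0, "left": 0.8, "right": -0.8,
}
ARC_LEN, ROBOT_RADIUS = 0.5, 0.15    # meters (assumed)

def rollout(curvature, n=10):
    """Sample n points along an arc starting at the origin, heading +x."""
    pts = []
    for i in range(1, n + 1):
        s = ARC_LEN * i / n
        if abs(curvature) < 1e-9:
            pts.append((s, 0.0))
        else:
            pts.append((math.sin(curvature * s) / curvature,
                        (1 - math.cos(curvature * s)) / curvature))
    return pts

def best_primitive(obstacles, goal):
    """Return the name of the collision-free primitive ending nearest the goal."""
    best, best_cost = None, float("inf")
    for name, k in PRIMITIVES.items():
        pts = rollout(k)
        if any(math.hypot(px - ox, py - oy) < ROBOT_RADIUS
               for px, py in pts for ox, oy in obstacles):
            continue                 # this arc would collide: discard it
        end = pts[-1]
        cost = math.hypot(goal[0] - end[0], goal[1] - end[1])
        if cost < best_cost:
            best, best_cost = name, cost
    return best
```

Repeating this selection at each replanning step yields receding-horizon navigation over whatever metric map the depth-fusion stage produces.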