Title: Design and Simulation of a Multi-Robot Architecture for Large-Scale Construction Projects
Large-scale construction projects can benefit from a team of heterogeneous building robots operating autonomously and cooperatively in unstructured environments. In this work, we propose a flexible system architecture, MARSala, that allows teams of distributed mobile robots to construct motion support structures in large, unstructured environments using purely local interactions. The paper primarily focuses on the deliberative layer of the architecture, which provides a means for formulating a construction project as a motion support structure construction problem. We implemented the architecture in simulation and demonstrated the benefits of such a formulation in two different construction scenarios set in large, unstructured environments.
Award ID(s):
2054744 1846340
PAR ID:
10321604
Author(s) / Creator(s):
Date Published:
Journal Name:
2021 International Symposium on Multi-Robot and Multi-Agent Systems (MRS)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
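The deliberative layer described in the abstract above formulates a construction project as a motion support structure construction problem. As a rough, hypothetical illustration only (the paper's actual formulation, data structures, and names are not reproduced here), the Python sketch below poses a toy version on a 2D terrain height grid: a start-to-goal path is found, and any transition steeper than what the robots can climb is flagged as a location where a support structure (e.g., a ramp) would have to be built first.

    # Toy illustration (assumed, not the paper's formulation): given a terrain
    # height grid and the maximum step height the mobile robots can climb, find
    # a start-to-goal path and list the cell transitions that need a support
    # structure (e.g., a ramp) before the path becomes traversable.
    from collections import deque

    def plan_support_structures(heights, start, goal, max_step):
        rows, cols = len(heights), len(heights[0])
        parent = {start: None}
        frontier = deque([start])
        while frontier:                          # plain BFS over the grid
            cell = frontier.popleft()
            if cell == goal:
                break
            r, c = cell
            for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
                if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in parent:
                    parent[(nr, nc)] = cell
                    frontier.append((nr, nc))
        if goal not in parent:
            return None, []
        path, cell = [], goal                    # reconstruct the path
        while cell is not None:
            path.append(cell)
            cell = parent[cell]
        path.reverse()
        # Transitions steeper than max_step become construction subgoals.
        ramps = [(a, b) for a, b in zip(path, path[1:])
                 if abs(heights[b[0]][b[1]] - heights[a[0]][a[1]]) > max_step]
        return path, ramps

In a deliberative layer of this kind, the flagged transitions would become construction subgoals handed to the robot team; here they are simply returned as a list.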
More Like this
  1.
    The ability to autonomously modify their environment dramatically increases the capability of robots to operate in unstructured environments. We develop a specialized construction algorithm and robotic system that can autonomously build motion support structures with previously unseen objects. The approach is based on our prior work on adaptive ramp building algorithms, but it eliminates the assumption of having specialized building materials that simplify manipulation and planning for stability. Utilizing irregularly shaped stones makes the problem significantly more challenging since the outcome of individual placements is sensitive to details of contact geometry and friction, which are difficult to observe. To reuse the same high-level algorithm, we develop a new physics-based planner that explicitly considers the uncertainty produced by incomplete in-situ sensing and imprecision during pickup and placement. We demonstrate the approach on a robotic system that uses a newly developed gripper to reliably pick up stones with minimal additional sensors or complex grasp planning. The resulting system can build structures with more than 70 stones, which in turn provide traversable paths to previously inaccessible locations. 
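    The sketch below illustrates the uncertainty-aware placement idea described above: a candidate stone placement is scored by sampling perturbed poses and counting how often a simulated drop remains stable. It is a hypothetical illustration; simulate_drop stands in for any rigid-body simulator call, and the noise parameters are assumptions rather than the paper's values.

        # Hypothetical sketch of uncertainty-aware placement scoring (not the
        # paper's planner; simulate_drop is a placeholder, not a real API).
        import random

        def placement_success_rate(stone, target_pose, simulate_drop,
                                   n_samples=50, pos_noise=0.01, rot_noise=0.05):
            """Estimate how often a placement stays stable under pose noise."""
            stable = 0
            for _ in range(n_samples):
                # Perturb the commanded pose to model pickup/placement imprecision.
                x, y, theta = target_pose
                noisy_pose = (x + random.gauss(0, pos_noise),
                              y + random.gauss(0, pos_noise),
                              theta + random.gauss(0, rot_noise))
                if simulate_drop(stone, noisy_pose):  # True if the pile stays stable
                    stable += 1
            return stable / n_samples

        # A planner could then prefer the candidate pose with the highest rate:
        # best = max(candidates, key=lambda p: placement_success_rate(stone, p, sim))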
  2. Construction robots have drawn increased attention as a potential means of improving construction safety and productivity. However, it is still challenging to ensure safe human-robot collaboration in dynamic and unstructured construction workspaces. On construction sites, multiple entities dynamically collaborate with each other and the situational context between them evolves continually. Construction robots must therefore be equipped to visually understand the scene's context (i.e., semantic relations to surrounding entities) and thereby collaborate safely with humans, as a human vision system does. Toward this end, this study builds a unique deep neural network architecture and develops a construction-specialized model by experimenting with multiple fine-tuning scenarios. This study also evaluates its performance on real construction operations data in order to examine its potential for real-world applications. The results showed the promising performance of the tuned model: the recall@5 on the training and validation datasets reached 92% and 67%, respectively. The proposed method, which supports construction co-robots with holistic scene understanding, is expected to contribute to promoting safer human-robot collaboration in construction.
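    The model above is evaluated with recall@5. For readers unfamiliar with the metric, the sketch below shows a generic recall@k computation over ranked relation predictions; the sample labels are invented and this is not the authors' evaluation code.

        # Generic recall@k over ranked predictions (illustrative only). Each
        # sample pairs a ground-truth label with the model's ranked predictions.
        def recall_at_k(samples, k=5):
            hits = sum(1 for truth, ranked in samples if truth in ranked[:k])
            return hits / len(samples)

        # Example: 2 of 3 ground-truth relations appear in the top-5 predictions.
        samples = [("worker-operates-excavator", ["worker-near-excavator",
                                                  "worker-operates-excavator"]),
                   ("worker-carries-rebar",      ["worker-near-truck"]),
                   ("truck-approaches-worker",   ["truck-approaches-worker"])]
        print(recall_at_k(samples, k=5))   # -> 0.666...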
  3. Robots operating in unstructured environments must localize contact to detect and recover from failure. For example, Fig. 1 shows a Minitaur robot that must localize where it has unexpectedly contacted the stair's edge so that it can properly step over it. We propose a kinematic method for proprioceptive contact localization using velocity measurements. The method is validated on two planar robots, the quadrupedal Minitaur and the DD Hand gripper, and compared to other state-of-the-art proprioceptive methods. We further show that the method can be extended to spatial robots by fusing the candidate contact points over time with a particle filter.
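    The per-timestep candidate contact points mentioned above are fused over time with a particle filter. The sketch below is a minimal, generic particle filter over a 1-D along-link contact coordinate; the motion model, noise levels, and measurement model are assumptions for illustration, not the paper's implementation.

        # Minimal generic particle filter for fusing noisy candidate contact
        # locations over time (illustrative only; not the paper's code).
        import random, math

        def update_particles(particles, measurement, process_noise=0.005,
                             meas_sigma=0.02):
            # Predict: the contact point is roughly stationary on the link.
            particles = [p + random.gauss(0, process_noise) for p in particles]
            # Weight: Gaussian likelihood of the new kinematic candidate point.
            weights = [math.exp(-0.5 * ((p - measurement) / meas_sigma) ** 2)
                       for p in particles]
            total = sum(weights) or 1e-12
            weights = [w / total for w in weights]
            # Resample with replacement according to the weights.
            return random.choices(particles, weights=weights, k=len(particles))

        particles = [random.uniform(0.0, 0.3) for _ in range(200)]  # along-link coords
        for z in [0.12, 0.11, 0.13, 0.12]:   # candidate contact points over time
            particles = update_particles(particles, z)
        estimate = sum(particles) / len(particles)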
  4. Perceiving the position and orientation of objects (i.e., pose estimation) is a crucial prerequisite for robots acting within their natural environment. We present a hardware acceleration approach to enable real-time and energy-efficient articulated pose estimation for robots operating in unstructured environments. Our hardware accelerator implements Nonparametric Belief Propagation (NBP) to infer the belief distribution of articulated object poses. Our approach is, on average, 26× more energy-efficient than a high-end GPU and 11× faster than an embedded low-power GPU implementation. Moreover, we present a Monte-Carlo Perception Library generated from high-level synthesis to enable reconfigurable hardware designs on FPGA fabrics that are better tuned to user-specified scene, resource, and performance constraints.
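    As background on the inference the accelerator above implements, the sketch below shows a heavily simplified particle-style message update in the spirit of nonparametric belief propagation: samples drawn from a source node's belief are pushed through a pairwise compatibility model and reweighted by local evidence. The functions and the toy 1-D example are assumptions for illustration; this is not the accelerator's algorithm or the Monte-Carlo Perception Library API.

        # Highly simplified particle-style message update in the spirit of NBP
        # (illustrative only).
        import random, math

        def nbp_message(src_particles, pairwise_sampler, unary, n_out=100):
            """Propagate samples from a source node's belief to a neighbor."""
            out = []
            for _ in range(n_out):
                x = random.choice(src_particles)      # draw from source belief
                y = pairwise_sampler(x)               # sample a compatible pose
                out.append((y, unary(y)))             # weight by local evidence
            total = sum(w for _, w in out) or 1e-12
            return [(y, w / total) for y, w in out]

        # Toy 1-D example: neighboring joint offset by ~0.1 with noise,
        # local evidence peaked at 0.5.
        msg = nbp_message(
            src_particles=[0.4, 0.41, 0.39],
            pairwise_sampler=lambda x: x + 0.1 + random.gauss(0, 0.01),
            unary=lambda y: math.exp(-0.5 * ((y - 0.5) / 0.05) ** 2))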
  5.
    The robotics community continually strives to create robots that are deployable in real-world environments. Often, robots are expected to interact with human groups. To achieve this goal, we introduce a new method, the Robot-Centric Group Estimation Model (RoboGEM), which enables robots to detect groups of people. Much of the work reported in the literature focuses on dyadic interactions, leaving a gap in our understanding of how to build robots that can effectively team with larger groups of people. Moreover, many current methods rely on exocentric vision, where cameras and sensors are placed externally in the environment rather than onboard the robot. Consequently, these methods are impractical for robots in unstructured, human-centric environments, which are novel and unpredictable. Furthermore, the majority of work on group perception is supervised, which can inhibit performance in real-world settings. RoboGEM addresses these gaps by predicting social groups solely from an egocentric perspective using color and depth (RGB-D) data. To achieve group predictions, RoboGEM leverages joint motion and proximity estimations. We evaluated RoboGEM against a challenging, egocentric, real-world dataset where both pedestrians and the robot are in motion simultaneously, and show that RoboGEM outperformed two state-of-the-art supervised methods in detection accuracy by up to 30%, with a lower miss rate. Our work will be helpful to the robotics community and serves as a milestone toward building unsupervised systems that will enable robots to work with human groups in real-world environments.
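    RoboGEM, as described above, leverages joint motion and proximity estimations to predict groups. The sketch below is a hypothetical stand-in that clusters tracked pedestrians whose positions are close and whose velocities are similar, using a small union-find; the thresholds and the pairing rule are assumptions, not the RoboGEM model itself.

        # Illustrative grouping of tracked pedestrians by proximity and similar
        # motion (assumed thresholds; this is not the RoboGEM model).
        import math

        def group_pedestrians(tracks, max_dist=1.5, max_vel_diff=0.5):
            """tracks: list of (x, y, vx, vy); returns lists of track indices."""
            n = len(tracks)
            parent = list(range(n))

            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]
                    i = parent[i]
                return i

            for i in range(n):
                for j in range(i + 1, n):
                    xi, yi, vxi, vyi = tracks[i]
                    xj, yj, vxj, vyj = tracks[j]
                    close = math.hypot(xi - xj, yi - yj) <= max_dist
                    similar = math.hypot(vxi - vxj, vyi - vyj) <= max_vel_diff
                    if close and similar:             # merge the two tracks
                        parent[find(i)] = find(j)
            groups = {}
            for i in range(n):
                groups.setdefault(find(i), []).append(i)
            return list(groups.values())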