
Title: Hierarchically Organized Computer Vision in Support of Multi-Faceted Search for Missing Persons
Missing person searches are typically initiated with a description of the person, including their age, race, clothing, and gender, possibly supported by a photo. Small Unmanned Aerial Systems (sUAS) equipped with Computer Vision (CV) capabilities can be deployed to search an area quickly for the missing person; however, the search task is far more difficult when a crowd of people is present and only the person described in the missing person report must be identified. It is particularly challenging to perform this task on the potentially limited computational resources of an sUAS. We therefore propose AirSight, a new model that hierarchically combines multiple CV models, exploits both onboard and off-board computing capabilities, and engages humans interactively in the search. For illustrative purposes, we use AirSight to show how a person's image, extracted from aerial video, can be matched to a basic description of the person. Finally, as a work-in-progress paper, we describe ongoing efforts in building an aerial dataset of partially occluded people and physically deploying AirSight on our sUAS.
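The description-matching step outlined above can be illustrated with a minimal sketch. The attribute names, weights, and scoring scheme below are assumptions for illustration only, not AirSight's actual design:

```python
# Hypothetical sketch: scoring a detected person's CV-predicted attributes
# against a missing-person report. Attribute names and weights are
# illustrative placeholders, not AirSight's actual pipeline.
from dataclasses import dataclass


@dataclass
class Description:
    attributes: dict  # e.g. {"age_group": "adult", "shirt_color": "red"}


def match_score(predicted: dict, report: Description, weights: dict) -> float:
    """Weighted fraction of report attributes matched by CV predictions."""
    total = sum(weights.get(k, 1.0) for k in report.attributes)
    hit = sum(weights.get(k, 1.0)
              for k, v in report.attributes.items()
              if predicted.get(k) == v)
    return hit / total if total else 0.0


report = Description({"age_group": "adult", "gender": "female",
                      "shirt_color": "red"})
predicted = {"age_group": "adult", "gender": "female", "shirt_color": "blue"}
print(match_score(predicted, report, {"shirt_color": 2.0}))  # 0.5
```

In a hierarchical design, a cheap score like this could run onboard to prune candidates before heavier models, or human reviewers, examine the survivors off-board.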
Publisher / Repository: 2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)
Page Range / eLocation ID: 1 to 7
Subject(s) / Keyword(s): aerial search, drones, computer vision
Location: Waikoloa Beach, HI, USA
Sponsoring Org: National Science Foundation
More Like this
  1. Computer Vision (CV) is used in a broad range of Cyber-Physical Systems, such as surgical and factory-floor robots and autonomous vehicles, including small Unmanned Aerial Systems (sUAS). It enables machines to perceive the world by detecting and classifying objects of interest, reconstructing 3D scenes, estimating motion, and maneuvering around objects. CV algorithms are developed using diverse machine learning and deep learning frameworks, which are often deployed on resource-limited edge devices. As sUAS rely upon an accurate and timely perception of their environment to perform critical tasks, problems related to CV can create hazardous conditions leading to crashes or mission failure. In this paper, we perform a systematic literature review (SLR) of challenges associated with CV, hardware, and software engineering. We then group the reported challenges into five categories and fourteen sub-challenges and present existing solutions. As current literature focuses primarily on CV and hardware challenges, we close by discussing implications for Software Engineering, drawing examples from a CV-enhanced multi-sUAS system.
  2. Abstract

    Thousands of people are reported lost in the wilderness in the United States every year, and locating these missing individuals as rapidly as possible depends on coordinated search and rescue (SAR) operations. As time passes, the search area grows, the survival rate decreases, and searchers face an increasingly daunting task of searching large areas in a short amount of time. To optimize the search process, mathematical models of lost person behavior with respect to landscape can be used in conjunction with current SAR practices. In this paper, we introduce an agent-based model of lost person behavior that allows agents to move on known landscapes with behavior defined as independent realizations of a random variable. The behavior random variable selects from a distribution of six known lost person reorientation strategies to simulate the agent's trajectory. We systematically simulate a range of possible behavior distributions and find a best-fit behavioral profile for a hiker using the International Search and Rescue Incident Database. We validate these results with a leave-one-out analysis. This work represents the first time-discrete model of lost person dynamics validated with data from real SAR incidents and has the potential to improve current methods for wilderness SAR.
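    The core simulation idea, an agent that draws one of six reorientation strategies from a behavior distribution at each discrete time step, can be sketched as follows. The strategy names and movement rules here are simplified placeholders, not the paper's actual model:

```python
# Illustrative sketch of a time-discrete lost-person agent: each step draws
# a reorientation strategy from a behavior distribution and moves on a plane.
# Strategy names and movement rules are simplified placeholders.
import math
import random

STRATEGIES = ["random_walk", "route_travel", "direction_travel",
              "stay_put", "view_enhance", "backtrack"]


def step(pos, heading, strategy, rng):
    """Apply one unit-length move (or none) under the chosen strategy."""
    x, y = pos
    if strategy == "stay_put":
        return pos, heading
    if strategy == "random_walk":
        heading = rng.uniform(0.0, 2.0 * math.pi)  # pick a fresh direction
    elif strategy == "backtrack":
        heading += math.pi  # reverse course
    # route_travel / direction_travel / view_enhance: keep heading (placeholder)
    return (x + math.cos(heading), y + math.sin(heading)), heading


def simulate(behavior_weights, n_steps, seed=0):
    """Trajectory of one agent whose behavior is an i.i.d. draw per step."""
    rng = random.Random(seed)
    pos, heading = (0.0, 0.0), 0.0
    trajectory = [pos]
    for _ in range(n_steps):
        strategy = rng.choices(STRATEGIES, weights=behavior_weights)[0]
        pos, heading = step(pos, heading, strategy, rng)
        trajectory.append(pos)
    return trajectory


trajectory = simulate([0.3, 0.2, 0.2, 0.1, 0.1, 0.1], n_steps=50)
```

    Fitting the six weights against incident data, as the paper does with the International Search and Rescue Incident Database, turns this forward simulator into a behavioral profile.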

  3. The emerging sector of offshore kelp aquaculture represents an opportunity to produce biofuel feedstock to help meet growing energy demand. Giant kelp represents an attractive aquaculture crop due to its rapid growth and production; however, precision farming over large scales is required to make this crop economically viable. These demands necessitate high-frequency monitoring to ensure outplant success, maximum production, and optimum quality of harvested biomass, while the long distance from shore and the large necessary scales of production make in-person monitoring impractical. Remote sensing offers a practical monitoring solution, and nascent imaging technologies could be leveraged to provide daily products of the kelp canopy and subsurface structures over unprecedented spatial scales. Here, we evaluate the efficacy of remote sensing from satellites and aerial and underwater autonomous vehicles as potential monitoring platforms for offshore kelp aquaculture farms. Decadal-scale analyses of the Southern California Bight showed that high offshore summertime cloud cover restricts the ability of satellite sensors to provide high-frequency direct monitoring of these farms. By contrast, daily monitoring of offshore farms using sensors mounted to aerial and underwater drones seems promising. Small Unoccupied Aircraft Systems (sUAS) carrying lightweight optical sensors can provide estimates of canopy area, density, and tissue nitrogen content on the time and space scales necessary for observing changes in this highly dynamic species. Underwater color imagery can be rapidly classified using deep learning models to identify kelp outplants on a longline farm, and high acoustic returns of kelp pneumatocysts from side scan sonar imagery signal an ability to monitor the subsurface development of kelp fronds. Current sensing technologies can be used to develop additional machine learning and spectral algorithms to monitor outplant health and canopy macromolecular content; however, future developments in vehicle and infrastructure technologies are necessary to reduce costs and transcend operational limitations for continuous deployment in an offshore setting.
  4. Identifying people in photographs is a critical task in a wide variety of domains, from national security [7] to journalism [14] to human rights investigations [1]. The task is also fundamentally complex and challenging. With the world population at 7.6 billion and growing, the candidate pool is large. Studies of human face recognition ability show that the average person incorrectly identifies two people as similar 20–30% of the time, and trained police detectives do not perform significantly better [11]. Computer vision-based face recognition tools have gained considerable ground and are now widely available commercially, but comparisons to human performance show mixed results at best [2,10,16]. Automated face recognition techniques, while powerful, also have constraints that may be impractical for many real-world contexts. For example, face recognition systems tend to suffer when the target image or reference images have poor quality or resolution, as blemishes or discolorations may be incorrectly recognized as false positives for facial landmarks. Additionally, most face recognition systems ignore some salient facial features, like scars or other skin characteristics, as well as distinctive non-facial features, like ear shape or hair or facial hair styles. This project investigates how we can overcome these limitations to support person identification tasks. By adjusting confidence thresholds, users of face recognition can generally expect high recall (few false negatives) at the cost of low precision (many false positives). Therefore, we focus our work on the "last mile" of person identification, i.e., helping a user find the correct match among a large set of similar-looking candidates suggested by face recognition. Our approach leverages the powerful capabilities of the human vision system and collaborative sensemaking via crowdsourcing to augment the complementary strengths of automatic face recognition.
The result is a novel technology pipeline combining collective intelligence and computer vision. We scope this project to focus on identifying soldiers in photos from the American Civil War era (1861–1865). An estimated 4,000,000 soldiers fought in the war, and most were photographed at least once, due to decreasing costs, the increasing robustness of the format, and the critical events separating friends and family [17]. Over 150 years later, the identities of most of these portraits have been lost, but as museums and archives increasingly digitize and publish their collections online, the pool of reference photos and information has never been more accessible. Historians, genealogists, and collectors work tirelessly to connect names with faces, using largely manual identification methods [3,9]. Identifying people in historical photos is important for preserving material culture [9], correcting the historical record [13], and recognizing contributions of marginalized groups [4], among other reasons.
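The recall-oriented thresholding described above can be sketched in a few lines: a deliberately loose distance cutoff on face embeddings keeps nearly all true matches and hands the resulting candidate set to human reviewers. The embeddings, gallery, and threshold value below are illustrative assumptions, not this project's actual system:

```python
# Sketch of recall-oriented candidate filtering: a loose threshold on
# face-embedding distance favors few false negatives (high recall) at the
# cost of many false positives, which the human "last mile" resolves.
# Embeddings and the threshold value are illustrative assumptions.
import math


def euclidean(a, b):
    """Euclidean distance between two equal-length embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def candidates(query_emb, gallery, threshold=1.2):
    """All gallery entries within the (deliberately permissive) threshold,
    sorted nearest-first, for human review."""
    hits = [(name, euclidean(query_emb, emb)) for name, emb in gallery]
    return sorted((h for h in hits if h[1] <= threshold), key=lambda h: h[1])


gallery = [("A", [0.1, 0.2]), ("B", [0.6, 0.8]), ("C", [2.0, 2.0])]
print(candidates([0.0, 0.0], gallery))  # A and B survive; C is pruned
```

Raising the threshold trades precision for recall; the crowdsourced sensemaking stage then does the disambiguation that the automated ranker cannot.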
  5. Robots such as unmanned aerial vehicles (UAVs) deployed for search and rescue (SAR) can explore areas where human searchers cannot easily go and gather information on scales that can transform SAR strategy. Multi-UAV teams therefore have the potential to transform SAR by augmenting the capabilities of human teams and providing information that would otherwise be inaccessible. Our research aims to develop new theory and technologies for field-deploying autonomous UAVs and managing multi-UAV teams working in concert with multi-human teams for SAR. Specifically, in this paper we summarize our work in progress towards these goals, including: (1) a multi-UAV search path planner that adapts to human behavior; (2) an in-field distributed computing prototype that supports multi-UAV computation and communication; (3) behavioral modeling that yields spatially localized predictions of lost person location; and (4) an interface between human searchers and UAVs that facilitates human-UAV interaction over a wide range of autonomy.