

Title: LabelBee: a web platform for large-scale semi-automated analysis of honeybee behavior from video
The LabelBee system is a web application designed to facilitate the collection, annotation, and analysis of large amounts of honeybee behavior data from video monitoring. It is developed as part of the NSF BIGDATA project “Large-scale multi-parameter analysis of honeybee behavior in their natural habitat”, in which we analyze continuous video of the entrance of bee colonies. Due to the large volume and complexity of the data, LabelBee provides advanced Artificial Intelligence and visualization capabilities to enable the construction of the good-quality datasets necessary for the discovery of complex behavior patterns. It integrates several levels of information: raw video, honeybee positions, decoded tags, individual trajectories, and behavior events (entrance/exit, presence of pollen, fanning, etc.). This integration enables the combination of manual and automatic processing by the biologist end-users, who also share and correct their annotations through a centralized server. These annotations are used by the computer scientists to create new automatic models and improve the quality of the automatic modules. The data constructed by this semi-automated approach can then be exported for analysis, which takes place on the same server using Jupyter notebooks for the extraction and exploration of behavior patterns.
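As a rough illustration of how such multi-level event annotations might be consumed in a notebook for analysis, here is a minimal Python sketch; the field names and event types are assumptions for illustration, not the actual LabelBee export schema.

```python
# Hypothetical sketch of per-bee behavior events as they might be exported
# for analysis in a Jupyter notebook; field names are illustrative, not the
# actual LabelBee schema.
from dataclasses import dataclass
from collections import Counter

@dataclass
class BeeEvent:
    tag_id: int        # decoded barcode identity of the bee
    timestamp: float   # seconds since start of the recording
    kind: str          # "entrance", "exit", "fanning", ...
    pollen: bool       # pollen load detected on this event

events = [
    BeeEvent(tag_id=17, timestamp=12.4, kind="entrance", pollen=True),
    BeeEvent(tag_id=17, timestamp=98.1, kind="exit", pollen=False),
    BeeEvent(tag_id=42, timestamp=45.0, kind="entrance", pollen=False),
]

# Count pollen-bearing entrances per individual, a typical aggregate query.
pollen_trips = Counter(e.tag_id for e in events if e.kind == "entrance" and e.pollen)
print(pollen_trips)  # Counter({17: 1})
```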
Award ID(s):
1633184 1707355
NSF-PAR ID:
10176778
Journal Name:
Proceedings of Artificial Intelligence for Data Discovery and Reuse (AIDR’19)
Page Range / eLocation ID:
1 to 4
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The ability to automate the analysis of video for monitoring animals and insects is of great interest for behavior science and ecology. In particular, honeybees play a crucial role in agriculture as natural pollinators. However, recent studies have shown that phenomena such as colony collapse disorder are causing the loss of many colonies. Due to the high number of interacting factors behind these events, a multi-faceted analysis of the bees in their environment is required. Our work focuses on developing tools to help model and understand their behavior as individuals, in relation to the health and performance of the colony. In this paper, we report the development of a new system for the detection, localization and tracking of honeybee body parts from video on the entrance ramp of the colony. The proposed system builds on recent advances in Convolutional Neural Networks (CNN) for human pose estimation and evaluates their suitability for the detection of honeybee pose, as shown in Figure 1. This opens the door for novel animal behavior analysis systems that take advantage of the precise detection and tracking of the insect pose.
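CNN pose estimators commonly produce one confidence heatmap per body part, with keypoints read off as the map's peak. The sketch below (NumPy only, not the authors' implementation) shows this standard decoding step under that assumption.

```python
# Minimal sketch of decoding body-part keypoints from CNN heatmaps,
# as in common pose-estimation pipelines; not the authors' implementation.
import numpy as np

def decode_keypoints(heatmaps: np.ndarray, threshold: float = 0.5):
    """heatmaps: (num_parts, H, W) array of per-part confidence maps."""
    keypoints = []
    for part_map in heatmaps:
        y, x = np.unravel_index(np.argmax(part_map), part_map.shape)
        score = part_map[y, x]
        # Keep the peak only if the network is confident enough.
        keypoints.append((x, y, score) if score >= threshold else None)
    return keypoints

# Toy example: 3 body parts (e.g., head, thorax, abdomen) on a 64x64 map.
rng = np.random.default_rng(0)
maps = rng.random((3, 64, 64)) * 0.4
maps[0, 10, 20] = 0.9  # simulate a confident peak for the first part
print(decode_keypoints(maps))
```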
  2. We present a novel system for the automatic video monitoring of honey bee foraging activity at the hive entrance. This monitoring system is built upon convolutional neural networks that perform multiple animal pose estimation without the need for marking. This precise detection of honey bee body parts is a key element of the system, enabling the detection of entrance and exit events at the hive, including accurate pollen detection. A detailed evaluation of the quality of the detection and a study of the effect of the parameters are presented. The complete system also integrates identification of barcode-marked bees, which enables monitoring at both aggregate and individual levels. The results obtained on multiple days of video recordings show the applicability of the approach for large-scale deployment. This is an important step forward for the understanding of complex behaviors exhibited by honey bees and the automatic assessment of colony health.
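One simple way to turn tracked positions into entrance/exit events is to test whether a trajectory crosses the hive opening, and in which direction. The sketch below is a hedged illustration of that idea under an assumed camera geometry; the paper's actual event logic (which also uses pose, tag identity, and pollen detection) is more involved.

```python
# Hedged sketch: classifying a tracked bee trajectory as an entrance or exit
# by comparing its start and end positions to the hive opening; the geometry
# (smaller y = closer to the opening) is an assumption for illustration.
def classify_event(trajectory, opening_y: float):
    """trajectory: list of (x, y) positions over time."""
    if len(trajectory) < 2:
        return "unknown"
    start_y, end_y = trajectory[0][1], trajectory[-1][1]
    if start_y > opening_y >= end_y:
        return "entrance"   # moved from the ramp into the opening
    if start_y <= opening_y < end_y:
        return "exit"       # moved from the opening onto the ramp
    return "pass"           # never crossed the opening

print(classify_event([(5, 120), (6, 80), (7, 30)], opening_y=50))  # entrance
```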
  3. Future view prediction for a 360-degree video streaming system is important to save network bandwidth and improve the Quality of Experience (QoE). Historical view data of a single viewer and of multiple viewers have been used for future view prediction. Video semantic information is also useful for predicting the viewer's future behavior. However, extracting video semantic information requires powerful computing hardware and large memory space to perform deep learning-based video analysis, which is not feasible for most client devices, such as small mobile devices or Head Mounted Displays (HMD). Therefore, we develop an approach where video semantic analysis is executed on the media server, and the analysis results are shared with clients via the Semantic Flow Descriptor (SFD) and View-Object State Machine (VOSM). SFD and VOSM become new descriptive additions to the Media Presentation Description (MPD) and Spatial Relation Description (SRD) to support 360-degree video streaming. Using the semantic-based approach, we design the Semantic-Aware View Prediction System (SEAWARE) to improve the overall view prediction performance. The evaluation results on 360-degree videos and real HMD view traces show that the SEAWARE system improves the view prediction performance and streams high-quality video with limited network bandwidth.
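As a generic illustration of how a predicted view can drive client-side streaming decisions, the sketch below selects which tiles of a tiled 360-degree video cover a predicted viewing direction. The SFD/VOSM interface itself is not specified in the abstract, so reducing the prediction to a yaw angle and equal-width tiles is an assumption.

```python
# Generic sketch of viewport-driven tile selection for tiled 360-degree
# streaming; the predicted yaw stands in for the paper's semantic-based
# prediction output (an assumption, not the paper's actual interface).
def select_tiles(predicted_yaw_deg: float, fov_deg: float = 100.0,
                 num_tiles: int = 8):
    """Return indices of equal-width horizontal tiles covering the FOV."""
    tile_width = 360.0 / num_tiles
    half = fov_deg / 2.0
    selected = []
    for i in range(num_tiles):
        center = i * tile_width + tile_width / 2.0
        # Angular distance on the circle between tile center and viewport.
        raw = abs(center - predicted_yaw_deg) % 360.0
        dist = min(raw, 360.0 - raw)
        if dist <= half + tile_width / 2.0:
            selected.append(i)
    return selected

print(select_tiles(90.0))  # tiles around the 90-degree viewing direction
```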
  4. The American Sign Language Linguistic Research Project (ASLLRP) provides Internet access to high-quality ASL video data, generally including front and side views and a close-up of the face. The manual and non-manual components of the signing have been linguistically annotated using SignStream(R). The recently expanded video corpora can be browsed and searched through the Data Access Interface (DAI 2) we have designed; it is possible to carry out complex searches. The data from our corpora can also be downloaded; annotations are available in an XML export format. We have also developed the ASLLRP Sign Bank, which contains almost 6,000 sign entries for lexical signs with distinct English-based glosses, totaling 41,830 examples of lexical signs (in addition to about 300 gestures, over 1,000 fingerspelled signs, and 475 classifier examples). The Sign Bank is likewise accessible and searchable on the Internet; it can also be accessed from within SignStream(R) (software to facilitate linguistic annotation and analysis of visual language data) to make annotations more accurate and efficient. Here we describe the available resources. These data have been used for many types of research in linguistics and in computer-based sign language recognition from video; examples of such research are provided in the latter part of this article.
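Since the annotations are distributed as XML exports, a downstream user might read them with Python's standard library, as in the hedged sketch below; the element and attribute names are invented for illustration, as the actual ASLLRP/SignStream export schema is not shown here.

```python
# Hedged sketch of reading a sign-annotation XML export; the tag and
# attribute names below are hypothetical, not the real export schema.
import xml.etree.ElementTree as ET

SAMPLE = """
<annotations>
  <sign gloss="BOOK" start_ms="1200" end_ms="1800"/>
  <sign gloss="READ" start_ms="1900" end_ms="2600"/>
</annotations>
"""

root = ET.fromstring(SAMPLE)
for sign in root.iter("sign"):
    duration = int(sign.get("end_ms")) - int(sign.get("start_ms"))
    print(f"{sign.get('gloss')}: {duration} ms")
```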
  5. In this paper, the recognition of pollen-bearing honey bees from videos of the entrance of the hive is presented. This computer vision task is a key component for the automatic monitoring of honeybees in order to obtain large-scale data on their foraging behavior and task specialization. Several approaches are considered for this task, including baseline classifiers, shallow Convolutional Neural Networks, and deeper networks from the literature. The experimental comparison is based on a new dataset of images of honeybees that was manually annotated for the presence of pollen. The proposed approach, based on Convolutional Neural Networks, is shown to outperform the other approaches in terms of accuracy. A detailed analysis of the results and of the influence of architectural parameters, such as the impact of dedicated color-based data augmentation, provides insights into how to apply the approach to the target application.
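In the spirit of the color-based augmentation the abstract mentions, here is a minimal PyTorch/torchvision sketch of a color-jitter pipeline feeding a small binary pollen classifier; the specific jitter ranges and network architecture are assumptions, not the paper's actual configuration.

```python
# Minimal sketch of color-based augmentation plus a small CNN for
# pollen/no-pollen classification; hyperparameters and architecture are
# illustrative assumptions, not the paper's.
import torch
import torch.nn as nn
from torchvision import transforms

# Color jitter varies brightness/contrast/saturation/hue of bee image crops
# (applied to PIL images in a dataset pipeline before ToTensor).
augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.2, contrast=0.2,
                           saturation=0.3, hue=0.05),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# A deliberately small CNN for binary pollen classification.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),  # logits: no-pollen vs. pollen
)

x = torch.randn(4, 3, 64, 64)  # a dummy batch of bee crops
print(model(x).shape)          # torch.Size([4, 2])
```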