- Award ID(s):
- 2102120
- PAR ID:
- 10412983
- Date Published:
- Journal Name:
- Frontiers in Plant Science
- Volume:
- 14
- ISSN:
- 1664-462X
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Abstract: Real-time execution of machine learning (ML) pipelines on radiology images is difficult due to limited computing resources in clinical environments, whereas running them in research clusters requires efficient data transfer capabilities. We developed Niffler, an open-source Digital Imaging and Communications in Medicine (DICOM) framework that enables ML and processing pipelines in research clusters by efficiently retrieving images from the hospitals’ PACS and extracting metadata from the images. We deployed Niffler at our institution (Emory Healthcare, the largest healthcare network in the state of Georgia) and retrieved data from 715 scanners spanning 12 sites, up to 350 GB/day, continuously in real time as a DICOM data stream over the past two years. We also used Niffler to retrieve images in bulk on demand based on user-provided filters to facilitate several research projects. This paper presents the architecture of Niffler and three such use cases. First, we executed an IVC filter detection and segmentation pipeline on abdominal radiographs in real time, which classified 989 test images with an accuracy of 96.0%. Second, we applied the Niffler Metadata Extractor to understand the operational efficiency of individual MRI systems based on calculated metrics. We benchmarked the accuracy of the calculated exam time windows by comparing Niffler against the Clinical Data Warehouse (CDW). Niffler accurately identified the scanners’ examination timeframes and idle times, whereas the CDW falsely depicted several exam overlaps due to human errors. Third, using metadata extracted from the images by Niffler, we identified scanners with misconfigured time and reconfigured five scanners. Our evaluations highlight how Niffler enables real-time ML and processing pipelines in a research cluster.
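As a rough illustration of the header-only metadata extraction described above (a minimal sketch, not Niffler's actual implementation; see the open-source repository for that), a pydicom-based pass might read DICOM headers while skipping pixel data and dump standard tags to CSV. The directory layout and the selected tags here are assumptions for illustration:

```python
# Illustrative sketch only: Niffler's real metadata extractor differs.
# Reads DICOM headers without loading pixel data and writes selected
# standard DICOM attributes to a CSV for downstream analysis.
from pathlib import Path
import csv
import pydicom

FIELDS = ["StationName", "Modality", "StudyDate", "SeriesTime",
          "AcquisitionTime", "InstitutionName"]

def extract_metadata(dicom_dir: str, out_csv: str) -> None:
    """Dump selected header tags from every .dcm file under dicom_dir."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(FIELDS)
        for path in Path(dicom_dir).rglob("*.dcm"):
            # stop_before_pixels avoids reading the (large) image payload
            ds = pydicom.dcmread(path, stop_before_pixels=True)
            writer.writerow([getattr(ds, name, "") for name in FIELDS])

if __name__ == "__main__":
    extract_metadata("incoming_dicom", "metadata.csv")  # hypothetical paths
```

Skipping pixel data is what makes this kind of extraction cheap enough to run continuously against a real-time DICOM stream.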
-
This paper introduces an innovative approach to 3D environmental mapping through the integration of a compact, handheld sensor package with a two-stage sensor fusion pipeline. The sensor package, incorporating LiDAR, IMU, RGB, and thermal cameras, enables comprehensive and robust 3D mapping of various environments. By leveraging Simultaneous Localization and Mapping (SLAM) and thermal imaging, our solution performs well where global positioning is unavailable and in visually degraded environments. The sensor package runs a real-time LiDAR-inertial SLAM algorithm, generating a dense point cloud map that accurately reconstructs the geometric features of the environment. After acquiring that point cloud, we post-process the data by fusing it with images from the RGB and thermal cameras to produce a detailed, color-enriched 3D map that is useful and adaptable to different mission requirements. We demonstrated our system in a variety of scenarios, from indoor to outdoor conditions, and the results showcased the effectiveness and applicability of our sensor package and fusion pipeline. The system can serve applications ranging from autonomous navigation to smart agriculture and has the potential to deliver substantial benefits across diverse fields.
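For intuition, the post-processing colorization stage can be sketched as projecting LiDAR points into a calibrated camera image and attaching pixel colors. This is a minimal sketch assuming a pinhole camera model with known intrinsics K and LiDAR-to-camera extrinsics (R, t); the authors' actual fusion pipeline is more involved:

```python
# Minimal colorization sketch: transform LiDAR points into the camera
# frame, project with a pinhole model, and sample RGB at each pixel.
import numpy as np

def colorize_points(points, image, K, R, t):
    """points: (N,3) LiDAR XYZ; image: (H,W,3) uint8; K: (3,3) intrinsics;
    R, t: LiDAR-to-camera rotation and translation. Returns (M,6) XYZRGB."""
    cam = points @ R.T + t                    # LiDAR frame -> camera frame
    in_front = cam[:, 2] > 0                  # keep points ahead of the camera
    cam = cam[in_front]
    uv = cam @ K.T                            # perspective projection
    uv = uv[:, :2] / uv[:, 2:3]               # normalize by depth
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    h, w = image.shape[:2]
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    rgb = image[v[valid], u[valid]]           # sample colors at pixel hits
    return np.hstack([points[in_front][valid], rgb.astype(float)])
```

The same projection applies to the thermal camera, yielding a temperature-annotated rather than color-annotated cloud.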
-
Batteryless image sensors present an opportunity for long-life, long-range sensor deployments that require zero maintenance and have low cost. Such deployments are critical for enabling remote sensing applications, e.g., instrumenting national highways, where individual devices are deployed far (kilometers away) from supporting infrastructure. In this work, we develop and characterize Camaroptera, the first batteryless image-sensing platform to combine energy harvesting with active, long-range (LoRa) communication. We also equip Camaroptera with a machine learning-based processing pipeline to mitigate costly, long-distance communication of image data. This processing pipeline filters out uninteresting images and transmits only the images interesting to the application. We show that, compared to running a traditional Sense-and-Send workload, Camaroptera’s Local Inference pipeline captures and sends up to 12× more images of interest to an application. By performing Local Inference, Camaroptera also sends up to 6.5× fewer uninteresting images, instead using that energy to capture up to 14.7× more new images, increasing its sensing effectiveness and availability. We fully prototype the Camaroptera hardware platform in a compact 2 cm × 3 cm × 5 cm volume. Our evaluation demonstrates the viability of a batteryless, remote, visual-sensing platform in a small package that collects and usefully processes acquired data and transmits it over long distances (kilometers), while being deployed for multiple decades with zero maintenance.
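The Local Inference pipeline reduces to a sense-filter-send loop that spends radio energy only on images the on-device classifier flags as interesting. Below is a behavioral sketch in Python for clarity; the real firmware runs in C on an intermittently powered microcontroller, and capture, classify, and lora_send are hypothetical stand-ins for Camaroptera's actual routines:

```python
import random  # stands in for real sensor data in this sketch

def capture():
    """Hypothetical stand-in: acquire one low-resolution grayscale frame."""
    return [random.random() for _ in range(120 * 160)]

def classify(frame) -> bool:
    """Hypothetical stand-in for the on-device ML filter: True if interesting."""
    return sum(frame) / len(frame) > 0.5

def lora_send(frame) -> None:
    """Hypothetical stand-in for the costly long-range LoRa transmission."""
    print(f"sending {len(frame)}-pixel image")

def local_inference_loop(cycles: int) -> None:
    # Each iteration models one harvested-energy cycle: capture, filter,
    # and transmit only frames the classifier deems interesting; discarded
    # frames free up energy to capture new images instead.
    for _ in range(cycles):
        frame = capture()
        if classify(frame):
            lora_send(frame)

if __name__ == "__main__":
    local_inference_loop(10)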
-
Abstract: As machine vision technology generates large amounts of data from sensors, it requires efficient computational systems for visual cognitive processing. Recently, in-sensor computing systems have emerged as a potential solution for reducing unnecessary data transfer and realizing fast, energy-efficient visual cognitive processing. However, they still lack the capability to process stored images directly within the sensor. Here, we demonstrate a heterogeneously integrated one-photodiode, one-memristor (1P-1R) crossbar for in-sensor visual cognitive processing, emulating a mammalian image-encoding process to extract features from input images. Unlike other neuromorphic vision processes, the trained weight values are applied as input voltages to the image-saved crossbar array instead of being stored in the memristors, realizing the in-sensor computing paradigm. We believe this heterogeneously integrated in-sensor computing platform provides an advanced architecture for real-time, data-intensive machine-vision applications via bio-stimulus domain reduction.
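Numerically, the readout described above amounts to a multiply-accumulate performed in the analog domain: pixels stored as memristor conductances, trained weights applied as read voltages, and each output current summing V·G by Ohm's and Kirchhoff's laws. A toy sketch with illustrative (non-physical) scaling:

```python
# Toy model of the 1P-1R crossbar readout: the image lives in the
# conductances, the trained filter weights arrive as row voltages, and
# each filter's output is the analog dot product of voltage and conductance.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))                    # pixel intensities in [0, 1]
g = 1e-6 + 9e-6 * image.ravel()               # pixels mapped to conductances (S)
weights = rng.standard_normal((4, 64))        # four trained feature filters

v = 0.1 * weights                             # filters applied as read voltages (V)
currents = v @ g                              # summed currents: one per filter
features = currents / np.abs(currents).max()  # normalized feature responses
print(features)
```

The point of the architecture is that the image never leaves the array: the sensing, storage, and weighted summation all happen at the same crossbar.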
-
Phenomics requires quantification of large volumes of image data, necessitating high-throughput image processing approaches. Existing image processing pipelines for Drosophila wings, a powerful genetic model for studying the underlying genetics of a broad range of cellular and developmental processes, are limited in speed, precision, and functional versatility. To expand the utility of the wing as a phenotypic screening system, we developed MAPPER, an automated machine learning-based pipeline that quantifies high-dimensional phenotypic signatures, with each dimension quantifying a unique morphological feature of the Drosophila wing. MAPPER magnifies the power of Drosophila phenomics by rapidly quantifying subtle phenotypic differences in sample populations. We benchmarked MAPPER’s accuracy and precision in replicating manual measurements to demonstrate its widespread utility. The morphological features extracted using MAPPER reveal variable sexual dimorphism across Drosophila species and unique underlying sex-specific differences in morphogen signaling in male and female wings. Moreover, the length of the proximal-distal axis across the species and sexes shows a conserved scaling relationship with respect to wing size. In sum, MAPPER is an open-source tool for rapid, high-dimensional analysis of large imaging datasets. These high-content phenomic capabilities enable rigorous and systematic identification of genotype-to-phenotype relationships in a broad range of screening and drug-testing applications and amplify the potential power of multimodal genomic approaches.
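To give a flavor of per-wing feature quantification (MAPPER's actual pipeline uses trained ML segmentation and a far richer feature set), a minimal scikit-image sketch might threshold a wing image and report basic shape descriptors. The file path and feature names below are illustrative assumptions, not MAPPER's API:

```python
# Minimal sketch of morphological feature extraction from a wing image;
# MAPPER itself uses a trained segmentation model and many more features.
from skimage import io, filters, measure

def wing_features(path: str) -> dict:
    """Segment the wing by Otsu thresholding and report shape descriptors."""
    gray = io.imread(path, as_gray=True)
    mask = gray < filters.threshold_otsu(gray)   # wing darker than background
    regions = measure.regionprops(measure.label(mask))
    wing = max(regions, key=lambda r: r.area)    # largest object = the wing
    return {
        "area": wing.area,
        "proximal_distal_length": wing.major_axis_length,
        "anterior_posterior_width": wing.minor_axis_length,
        "eccentricity": wing.eccentricity,
    }

print(wing_features("wing_image.png"))  # hypothetical input file
```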