Abstract
Background: Plant architecture can influence crop yield and quality. Manual extraction of architectural traits is, however, time-consuming, tedious, and error-prone. Trait estimation from 3D data addresses occlusion issues through the availability of depth information, while deep learning approaches enable feature learning without manual design. The goal of this study was to develop a data processing workflow leveraging 3D deep learning models and a novel 3D data annotation tool to segment cotton plant parts and derive important architectural traits.
Results: The Point Voxel Convolutional Neural Network (PVCNN), which combines point- and voxel-based representations of 3D data, required less inference time and delivered better segmentation performance than point-based networks. PVCNN achieved the best mIoU (89.12%) and accuracy (96.19%), with an average inference time of 0.88 s, compared to PointNet and PointNet++. For the seven architectural traits derived from the segmented parts, an R² value of more than 0.8 and a mean absolute percentage error of less than 10% were attained.
Conclusion: This plant part segmentation method based on 3D deep learning enables effective and efficient architectural trait measurement from point clouds, which could be useful for advancing plant breeding programs and characterizing in-season developmental traits. The plant part segmentation code is available at https://github.com/UGA-BSAIL/plant_3d_deep_learning.
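As a hedged illustration of the headline metric above, the sketch below shows one standard way to compute per-class IoU and mIoU from point-wise part labels. It is a minimal NumPy sketch, not the released code; the function name, the four-class part labeling, and the synthetic labels are assumptions.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection-over-Union over part classes.

    pred, target: integer label arrays of shape (N,), one label per
    point in the segmented point cloud.
    """
    ious = []
    for c in range(num_classes):
        pred_c, target_c = pred == c, target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:                 # class absent from both: skip it
            continue
        ious.append(np.logical_and(pred_c, target_c).sum() / union)
    return float(np.mean(ious))

# Synthetic example with four hypothetical part classes
rng = np.random.default_rng(0)
target = rng.integers(0, 4, size=10_000)
pred = np.where(rng.random(10_000) < 0.9, target, rng.integers(0, 4, size=10_000))
print(f"mIoU: {mean_iou(pred, target, 4):.4f}")
# Overall accuracy over the same labels is simply (pred == target).mean()
```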
Camera-view supervision for bird's-eye-view semantic segmentation
Bird's-eye-view Semantic Segmentation (BEVSS) is a powerful and crucial component of planning and control systems in many autonomous vehicles. Current methods rely on end-to-end learning to train models, leading to indirectly supervised and inaccurate camera-to-BEV projections. We propose a novel method of supervising feature extraction with camera-view depth and segmentation information, which improves the quality of feature extraction and projection in the BEVSS pipeline. Our model, evaluated on the nuScenes dataset, shows a 3.8% improvement in Intersection-over-Union (IoU) for vehicle segmentation and a 30-fold reduction in depth error compared to baselines, while maintaining a competitive inference speed of 32 FPS. This method offers more accurate and reliable BEVSS for real-time autonomous driving systems. The code and implementation details can be found at https://github.com/bluffish/sucam.
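The abstract does not give the exact training objective, so purely as a sketch of what "camera-view supervision" could look like, the snippet below combines a main BEV segmentation loss with auxiliary camera-view depth and segmentation losses. All names, loss forms, weights, and tensor shapes are assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F

def bevss_loss(bev_logits, bev_labels,
               cam_logits, cam_labels,
               depth_pred, depth_gt,
               w_cam=1.0, w_depth=1.0):
    """End-to-end BEV loss plus auxiliary camera-view supervision.

    The auxiliary terms give the camera-view feature extractor a direct
    training signal instead of only the indirect one back-propagated
    through the camera-to-BEV projection.
    """
    loss_bev = F.cross_entropy(bev_logits, bev_labels)    # (B,C,H,W) vs (B,H,W)
    loss_cam = F.cross_entropy(cam_logits, cam_labels)    # camera-view labels
    loss_depth = F.l1_loss(depth_pred, depth_gt)          # per-pixel depth
    return loss_bev + w_cam * loss_cam + w_depth * loss_depth

# Shapes are illustrative only
bev_logits = torch.randn(2, 3, 200, 200)
bev_labels = torch.randint(0, 3, (2, 200, 200))
cam_logits = torch.randn(2, 3, 64, 176)
cam_labels = torch.randint(0, 3, (2, 64, 176))
depth_pred, depth_gt = torch.rand(2, 1, 64, 176), torch.rand(2, 1, 64, 176)
print(bevss_loss(bev_logits, bev_labels, cam_logits, cam_labels, depth_pred, depth_gt))
```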
- Award ID(s): 2107449
- PAR ID: 10639139
- Publisher / Repository: Frontiers Media S.A.
- Date Published:
- Journal Name: Frontiers in Big Data
- Volume: 7
- ISSN: 2624-909X
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract
Background: Recent developments in segmenting and characterizing regions of interest (ROI) within medical images have led to promising shape analysis studies. However, the procedures used to analyze ROIs are arbitrary and vary by study. A tool to translate ROIs into analyzable shape representations and features is greatly needed.
Results: We developed SAFARI (shape analysis for AI-segmented images), an open-source package with a user-friendly online tool kit for ROI labelling and shape feature extraction from segmented maps, whether produced by AI algorithms or by manual segmentation. We demonstrated that half of the shape features extracted by SAFARI were significantly associated with survival outcomes in a case study of 143 consecutive patients with stage I–IV lung cancer and in another case study of 61 glioblastoma patients.
Conclusions: SAFARI is an efficient and easy-to-use toolkit for segmenting and analyzing ROIs in medical images. It can be downloaded from the Comprehensive R Archive Network (CRAN) and accessed at https://lce.biohpc.swmed.edu/safari/. (A hedged shape-feature sketch appears after this list.)
-
Hedden, Abigail S.; Mazzaro, Gregory J.; Raynal, Ann Marie (Ed.) Research into autonomous vehicles has focused on purpose-built vehicles with lidar, camera, and radar systems. Many vehicles on the road today, however, have sensors built in to support advanced driver assistance systems. In this paper we assess the ability of low-end automotive radar, coupled with lightweight algorithms, to perform scene segmentation. Results from a variety of scenes demonstrate the viability of this approach, which complements existing autonomous driving systems. (One guess at such a lightweight step is sketched after this list.)
-
Abstract
We present Atacama Cosmology Telescope (ACT) Data Release 6 (DR6) maps of the Cosmic Microwave Background temperature and polarization anisotropy at arcminute resolution over three frequency bands centered on 98, 150, and 220 GHz. The maps are based on data collected with the AdvancedACT camera over the period 2017–2022 and cover 19,000 square degrees with a median combined depth of 10 μK arcmin. We describe the instrument, mapmaking, and map properties and illustrate them with a number of figures and tables. The ACT DR6 maps and derived products are available on LAMBDA at https://lambda.gsfc.nasa.gov/product/act/actadv_prod_table.html. We also provide an interactive web atlas at https://phy-act1.princeton.edu/public/snaess/actpol/dr6/atlas and HiPS data sets in Aladin (e.g., https://alasky.cds.unistra.fr/ACT/DR4DR6/color_CMB).
-
SUMMARY
Single-cell analysis has transformed our understanding of cellular diversity, offering insights into complex biological systems. Yet manual data processing in single-cell studies poses challenges, including inefficiency, human error, and limited scalability. To address these issues, we propose the automated workflow cellSight, which integrates high-throughput sequencing in a user-friendly platform. By automating tasks like cell type clustering, feature extraction, and data normalization, cellSight reduces researcher workload, promoting focus on data interpretation and hypothesis generation. Its standardized analysis pipelines and quality control metrics enhance reproducibility, enabling collaboration across studies. Moreover, cellSight's adaptability supports integration with emerging technologies, keeping pace with advancements in single-cell genomics. cellSight accelerates discoveries in single-cell biology, driving impactful insights and clinical translation. It is available with documentation and tutorials at https://github.com/omicsEye/cellSight. (A hedged sketch of the standard pipeline steps it automates follows this list.)
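As noted in the SAFARI entry above, that package itself is written in R and its exact feature set is not given in the abstract. Purely for illustration, here is a minimal Python sketch of the kind of classical shape descriptors (area, perimeter, circularity) such toolkits extract from a binary ROI mask; everything in it is an assumption.

```python
import numpy as np

def shape_features(mask):
    """A few classical shape descriptors for a binary ROI mask.

    mask: 2D array, truthy inside the ROI. Generic descriptors for
    illustration only, not SAFARI's actual feature set.
    """
    mask = np.asarray(mask, dtype=bool)
    area = int(mask.sum())
    padded = np.pad(mask, 1)
    core = padded[1:-1, 1:-1]
    neighbors = (padded[:-2, 1:-1], padded[2:, 1:-1],
                 padded[1:-1, :-2], padded[1:-1, 2:])
    # Perimeter estimated as the number of ROI pixel edges touching background
    perimeter = int(sum((core & ~n).sum() for n in neighbors))
    circularity = 4 * np.pi * area / perimeter**2 if perimeter else 0.0
    return {"area": area, "perimeter": perimeter, "circularity": float(circularity)}

# Rasterized disk; the staircase edge count overestimates the true
# perimeter, so circularity lands below its ideal value of 1.
yy, xx = np.mgrid[:64, :64]
print(shape_features((yy - 32) ** 2 + (xx - 32) ** 2 <= 20 ** 2))
```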
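As flagged in the radar entry above, that paper names no specific algorithm, so the following is only a guess at what a "lightweight" segmentation step over low-end radar detections could look like: density-based clustering of one frame's point-target list with scikit-learn's DBSCAN. All data and parameters are invented.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical radar detections: (x, y) positions in meters from a
# low-end automotive radar's point-target list for a single frame.
rng = np.random.default_rng(1)
car = rng.normal([10.0, 2.0], 0.4, size=(25, 2))          # a nearby vehicle
rail = np.column_stack([np.linspace(5, 40, 30),           # a guardrail
                        np.full(30, -3.5) + rng.normal(0, 0.2, 30)])
clutter = rng.uniform([0, -10], [50, 10], size=(15, 2))   # sparse returns
points = np.vstack([car, rail, clutter])

# DBSCAN groups dense detection clusters into scene segments;
# label -1 marks sparse returns treated as clutter/noise.
labels = DBSCAN(eps=1.5, min_samples=4).fit_predict(points)
for lbl in sorted(set(labels)):
    name = "clutter" if lbl == -1 else f"segment {lbl}"
    print(name, (labels == lbl).sum(), "detections")
```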
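Finally, as noted in the cellSight entry, cellSight is distributed as an R package, so the snippet below is not its API. It is a hedged Python sketch, using the scanpy library, of the standard single-cell steps the abstract says cellSight automates (normalization, feature selection, clustering); the input file name is hypothetical, and sc.tl.leiden additionally requires the leidenalg package.

```python
import scanpy as sc

# Hypothetical input file; any AnnData-compatible matrix would do
adata = sc.read_10x_h5("sample_filtered_feature_bc_matrix.h5")
adata.var_names_make_unique()

# Normalization: library-size scaling followed by log1p
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# Feature selection and dimensionality reduction
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable].copy()
sc.pp.pca(adata, n_comps=50)

# Neighborhood graph, clustering, and 2D embedding
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata)       # requires the leidenalg package
sc.tl.umap(adata)
print(adata.obs["leiden"].value_counts())
```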