

Search for: All records

Award ID contains: 1954556


  1. Abstract

    Mungbean (Vigna radiata (L.) R. Wilczek) is an important pulse crop, increasingly used as a low-fat source of protein, fiber, carbohydrates, minerals, and bioactive compounds in human diets. Mungbean is a dicot plant with trifoliate leaves. Leaves are central to many plant functions, including photosynthesis, light interception, and canopy structure. The objectives were to investigate leaf morphological attributes, use image analysis to extract leaf morphological traits from photos of the Iowa Mungbean Diversity (IMD) panel, create a regression model to predict leaflet area, and undertake association mapping. We collected over 5000 leaf images of the IMD panel, consisting of 484 accessions, over 2 years (2020 and 2021) with two replications per experiment. Leaf traits were extracted using image analysis, analyzed, and used for association mapping. Morphological diversity included leaflet type (oval or lobed), leaflet size (small, medium, large), lobe angle (shallow, deep), and vein coloration (green, purple). A regression model was developed to predict each ovate leaflet's area (adjusted R² = 0.97; residual standard errors ≤ 1.10). The candidate genes Vradi01g07560, Vradi05g01240, Vradi02g05730, and Vradi03g00440 are associated with multiple traits (length, width, perimeter, and area) across the leaflets (left, terminal, and right). These are suitable candidate genes for further investigation of their roles in leaf development, growth, and function. Future studies will be needed to correlate the traits reported here with yield or other important agronomic traits for use as phenotypic or genotypic markers in marker-aided selection for mungbean crop improvement.
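As a rough illustration of the kind of single-predictor regression described above (the abstract does not give the study's actual model or predictors), leaflet area can be fit as a shape coefficient times length × width. The coefficient and toy measurements below are illustrative, not values from the paper:

```python
# Minimal sketch (not the paper's actual model): fit a single shape
# coefficient c so that area ≈ c * length * width, by least squares
# with no intercept. The toy data below is synthetic.

def fit_area_coefficient(lengths, widths, areas):
    """Least-squares slope c for area = c * (length * width)."""
    x = [l * w for l, w in zip(lengths, widths)]
    num = sum(xi * yi for xi, yi in zip(x, areas))
    den = sum(xi * xi for xi in x)
    return num / den

def predict_area(c, length, width):
    """Predicted leaflet area from the fitted shape coefficient."""
    return c * length * width

# Toy measurements (cm): leaflet length, width, and hand-measured area
lengths = [8.0, 10.0, 12.0, 6.5]
widths  = [5.0,  6.0,  7.5, 4.0]
areas   = [30.1, 45.2, 67.4, 19.6]

c = fit_area_coefficient(lengths, widths, areas)
print(round(c, 3))                       # fitted shape coefficient
print(round(predict_area(c, 9.0, 5.5), 1))
```

The single-coefficient form is only a sketch; a full model would add an intercept and report the adjusted R² and residual standard error cited in the abstract.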

     
    Free, publicly-accessible full text available December 1, 2024
  2. Abstract Background

    COVID‐19 has led to an unprecedented increase in the use of technology for teaching and learning in higher education institutions (HEIs), including in engineering, computing, and technology programs. Given the urgency of the situation, technologies were often implemented with a short‐term rather than long‐term view.

    Purpose

    In this study, we investigate students' perceptions of the use of video‐based monitoring (VbM) for proctoring exams to better assess its impact on students. We leverage technological ambivalence as a framing lens to analyze students' experiences and perceptions of using VbM and draw implications for responsible use of educational technology.

    Method

    Qualitative data were collected from students using focus group interviews and discussion board assignments and analyzed inductively to understand students' experiences.

    Findings

    We present a framework of how a technological shift of existing practice triggered ambivalence that manifested itself as a sustained negative outlook among students regarding the use of VbM, as well as their institution and instructors. Students accepted the inevitability of the technology but were unconvinced that the benefits of VbM outweighed its risks.

    Conclusions

    As instructors use educational technologies that are inherently driven by user data and algorithms that are not transparent, it is imperative that they are attentive to the responsible use of technology. To educate future engineers who are ethically and morally responsible, engineering educators and engineering institutions need to exhibit that behavior in their own practices, starting with their use of educational technologies.

     
  3. Introduction

    Computer vision and deep learning (DL) techniques have succeeded in a wide range of fields. Recently, these techniques have been successfully deployed in plant science applications to address food security, productivity, and environmental sustainability problems for a growing global population. However, training these DL models often requires large-scale manual annotation of data, which frequently becomes a tedious and time- and resource-intensive process. Recent advances in self-supervised learning (SSL) methods have proven instrumental in overcoming these obstacles, using purely unlabeled datasets to pre-train DL models.

    Methods

    Here, we implement the popular self-supervised contrastive learning methods NNCLR (Nearest Neighbor Contrastive Learning of visual Representations) and SimCLR (Simple framework for Contrastive Learning of visual Representations) for the classification of spatial orientation and the segmentation of embryos of maize kernels. Maize kernels are imaged using a commercial high-throughput imaging system. This image data is used in multiple downstream applications across both production and breeding, for instance, sorting for oil content based on segmenting and quantifying the scutellum's size, and classifying haploid and diploid kernels.
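For readers unfamiliar with contrastive pre-training, the SimCLR objective named above (the NT-Xent loss) can be sketched in a few lines of plain Python. The temperature of 0.5 and the tiny embeddings are illustrative, not the paper's settings; real implementations operate on large batches of encoder outputs:

```python
# Sketch of the NT-Xent (normalized temperature-scaled cross-entropy)
# loss used by SimCLR. Embeddings i and i+N are the two augmented views
# of the same image (the positive pair); all other embeddings serve as
# negatives in the softmax denominator.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nt_xent(embeddings, temperature=0.5):
    """embeddings: list of 2N vectors; views i and i+N are positives."""
    n = len(embeddings) // 2
    total = 0.0
    for i in range(2 * n):
        j = (i + n) % (2 * n)  # index of i's positive pair
        pos = cosine(embeddings[i], embeddings[j]) / temperature
        denom = sum(math.exp(cosine(embeddings[i], embeddings[k]) / temperature)
                    for k in range(2 * n) if k != i)
        total += -math.log(math.exp(pos) / denom)
    return total / (2 * n)

# Toy batch: two images, two views each (views 0/2 and 1/3 are pairs)
loss = nt_xent([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [0.1, 0.9]])
print(loss)
```

The loss shrinks as positive pairs align and negatives repel, which is what drives representation learning without any labels.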

    Results and discussion

    We show that in both classification and segmentation problems, SSL techniques outperform their purely supervised, transfer learning-based counterparts and are significantly more annotation-efficient. Additionally, we show that a single SSL pre-trained model can be efficiently fine-tuned for both classification and segmentation, indicating good transferability across multiple downstream applications. Segmentation models with SSL-pretrained backbones produce Dice similarity coefficients of 0.81, higher than the 0.78 and 0.73 of those with ImageNet-pretrained and randomly initialized backbones, respectively. We observe that fine-tuning classification and segmentation models on as little as 1% of the annotations produces competitive results. These results show that SSL provides a meaningful step forward in data efficiency for agricultural deep learning and computer vision.
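The Dice similarity coefficient reported above can be computed for binary segmentation masks as follows (a minimal sketch; real pipelines operate on 2D image arrays rather than flat lists):

```python
# Dice similarity coefficient for binary masks:
# Dice = 2 * |P ∩ T| / (|P| + |T|), where P is the predicted mask
# and T the ground-truth mask. 1.0 means perfect overlap.

def dice(pred, truth):
    """pred, truth: flat lists of 0/1 values of equal length."""
    inter = sum(p & t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    return 2.0 * inter / size if size else 1.0

pred  = [1, 1, 0, 1, 0, 0]   # hypothetical predicted scutellum mask
truth = [1, 0, 0, 1, 1, 0]   # hypothetical ground-truth mask
print(dice(pred, truth))
```

A score of 0.81, as reported for the SSL-pretrained backbones, means predicted and true masks share roughly four-fifths of their combined area.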

     
  4. Abstract

    Pneumatically actuated soft continuum manipulators (SCMs) are constructed by combining several extending or contracting fiber-reinforced elastomeric enclosure (FREE) actuators in series, in parallel, or in combinations thereof. While it is well known that architectures with serial combinations of FREEs yield a large workspace and dexterity, they suffer from design and control complexity, an increased number of valves, and higher inertia. Recent advances in exploring the FREE design space have demonstrated that parallel combinations of dissimilar FREEs (bending and rotating) can improve workspace and dexterity. This paper presents a comprehensive investigation of SCM design architectures by enumerating serial and parallel combinations of similar and dissimilar FREEs. A novel dexterity metric is proposed to enable objective comparison of different SCM designs based on shape similarity and the end-effector tangent. Given a fixed budget of control inputs (actuators and valves), the paper systematically selects the SCM architecture (serial or parallel, with similar or dissimilar FREEs) that maximizes dexterity and workspace. Optimal designs are found to depend heavily on the application context, which may change how these manipulators are deployed. The paper presents two practical design applications that demonstrate the usefulness of the enumeration framework. While serial combinations of symmetric bending actuators generally yield a larger workspace and dexterity, some architectures with asymmetric combinations of FREEs can achieve similar levels of both.
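The enumeration idea can be sketched as follows. This is a hypothetical simplification, not the paper's formulation: the FREE type labels, the one-valve-per-actuator assumption, and the function names are all illustrative:

```python
# Hypothetical sketch: enumerate SCM architectures as serial chains of
# parallel FREE bundles, subject to a budget on control inputs.
# Assumes one valve per FREE actuator; the paper's actual constraint
# model and FREE taxonomy may differ.
from itertools import combinations_with_replacement, product

FREE_TYPES = ["extend", "contract", "bend", "rotate"]

def enumerate_architectures(n_segments, frees_per_segment, valve_budget):
    """Yield serial chains of parallel bundles within the valve budget."""
    # A parallel bundle is an unordered multiset of FREE types.
    bundles = list(combinations_with_replacement(FREE_TYPES, frees_per_segment))
    for chain in product(bundles, repeat=n_segments):
        valves = sum(len(bundle) for bundle in chain)  # one valve per FREE
        if valves <= valve_budget:
            yield chain

# Two serial segments, two FREEs in parallel per segment, four valves
designs = list(enumerate_architectures(2, 2, 4))
print(len(designs))
```

Each candidate chain would then be scored by a dexterity metric and workspace estimate to select the best architecture for a given application.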

     
    Free, publicly-accessible full text available August 20, 2024
  5. Abstract

    Insect pests cause significant damage to food production, so early detection and efficient mitigation strategies are crucial. There is a continual shift toward machine learning (ML)‐based approaches for automating agricultural pest detection. Although supervised learning has achieved remarkable progress in this regard, it is impeded by the need for significant expert involvement in labeling the data used for model training, making real‐world applications tedious and often infeasible. Recently, self‐supervised learning (SSL) approaches have provided a viable alternative for training ML models with minimal annotations. Here, we present an SSL approach to classify 22 insect pests. The framework was assessed on raw and segmented field‐captured images using three SSL methods: Nearest Neighbor Contrastive Learning of Visual Representations (NNCLR), Bootstrap Your Own Latent, and Barlow Twins. SSL pre‐training was done on ResNet‐18 and ResNet‐50 models using all three SSL methods on the original RGB images and foreground‐segmented images. The performance of the SSL pre‐training methods was evaluated using linear probing of SSL representations and end‐to‐end fine‐tuning. The SSL‐pre‐trained convolutional neural network models were able to perform annotation‐efficient classification. NNCLR was the best‐performing SSL method for both linear probing and full‐model fine‐tuning. With just 5% of images annotated, transfer learning with ImageNet initialization obtained 74% accuracy, whereas NNCLR achieved an improved classification accuracy of 79% with end‐to‐end fine‐tuning. Models created using SSL pre‐training consistently performed better, especially under very low annotation, and were robust to object‐class imbalances. These approaches help overcome annotation bottlenecks and are resource‐efficient.
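NNCLR's key idea, replacing one augmented view with its nearest neighbor from a support set of past embeddings, can be sketched as follows (a toy illustration; real NNCLR maintains a large queue of normalized encoder features and feeds the retrieved neighbor into a contrastive loss):

```python
# Toy sketch of the nearest-neighbor lookup at the heart of NNCLR:
# the embedding of one augmented view is swapped for its closest match
# in a support set, and that match serves as the positive for the
# other view's contrastive loss.
import math

def nearest_neighbor(query, support_set):
    """Return the support embedding with highest cosine similarity."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)
    return max(support_set, key=lambda s: cos(query, s))

# Hypothetical support set of past embeddings (queue of encoder outputs)
support = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
view1 = [0.9, 0.1]                       # embedding of one augmented view
positive = nearest_neighbor(view1, support)
print(positive)                          # positive used for the other view
```

Using a neighbor rather than the second view directly exposes the model to natural intra-class variation beyond what augmentations provide, which is one explanation for NNCLR's edge in the results above.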

     
  6. Free, publicly-accessible full text available May 1, 2025
  7. Free, publicly-accessible full text available February 1, 2025
  8. Free, publicly-accessible full text available October 18, 2024
  9. Free, publicly-accessible full text available October 1, 2024
  10. Free, publicly-accessible full text available October 1, 2024