Title: Understanding People's Perceptions of Approaches to Semi-Automated Dietary Monitoring
The respective benefits and drawbacks of manual food journaling and automated dietary monitoring (ADM) suggest the value of semi-automated journaling systems that combine the two approaches. However, current understanding of how people anticipate strategies for implementing semi-automated food journaling systems is limited. We therefore conduct a speculative survey study with 600 responses, examining how people anticipate approaches to automatic capture and prompting for details. Participants feel that the location and detection capability of ADM sensors influence anticipated physical, social, and privacy burdens. People more positively anticipate prompts that contain information relevant to their journaling goals, help them recall what they ate, and are quick to respond to. Our work suggests a tradeoff between ADM systems' detection performance and anticipated acceptability: sensors on facial areas have higher performance but lower acceptability than sensors in other areas, and more usable prompting methods, such as those naming specific foods, are more challenging to produce than manual reminders. We suggest opportunities to improve higher-acceptability, lower-accuracy ADM sensors, to select approaches based on individual and practitioner journaling needs, and to better describe capabilities to potential users.
Award ID(s):
1850389
NSF-PAR ID:
10376307
Date Published:
Journal Name:
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Volume:
6
Issue:
3
ISSN:
2474-9567
Page Range / eLocation ID:
1 to 27
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Plants, and the biological systems around them, are key to the future health of the planet and its inhabitants. The Plant Science Decadal Vision 2020–2030 frames our ability to perform vital and far‐reaching research in plant systems sciences, essential to how we value participants and apply emerging technologies. We outline a comprehensive vision for addressing some of our most pressing global problems through discovery, practical applications, and education. The Decadal Vision was developed by the participants at Plant Summit 2019, a community event organized by the Plant Science Research Network. The Decadal Vision describes a holistic vision for the next decade of plant science that blends recommendations for research, people, and technology. Going beyond discoveries and applications, we, the plant science community, must implement bold, innovative changes to research cultures and training paradigms in this era of automation, virtualization, and the looming shadow of climate change. Our vision and hopes for the next decade are encapsulated in the phrase "reimagining the potential of plants for a healthy and sustainable future." The Decadal Vision recognizes the vital intersection of human and scientific elements and demands an integrated implementation of strategies for research (Goals 1–4), people (Goals 5 and 6), and technology (Goals 7 and 8). This report is intended to help inspire and guide the research community, scientific societies, federal funding agencies, private philanthropies, corporations, educators, entrepreneurs, and early career researchers over the next 10 years. The research goals encompass experimental and computational approaches to understanding and predicting ecosystem behavior; novel production systems for food, feed, and fiber with greater crop diversity, efficiency, productivity, and resilience that improve ecosystem health; and approaches to realize the potential for advances in nutrition, discovery and engineering of plant‐based medicines, and green infrastructure. Launching the Transparent Plant will use experimental and computational approaches to break down the phytobiome into a parts store that supports tinkering, query, prediction, and rapid‐response problem solving. Equity, diversity, and inclusion are indispensable cornerstones of realizing our vision. We make recommendations around funding and systems that support customized professional development. Plant systems are frequently taken for granted; we therefore make recommendations to improve plant awareness and community science programs to increase understanding of scientific research. We prioritize emerging technologies, focusing on non‐invasive imaging, sensors, and plug‐and‐play portable lab technologies, coupled with enabling computational advances. Plant systems science will benefit from data management and future advances in automation, machine learning, natural language processing, and artificial intelligence‐assisted data integration, pattern identification, and decision making. Implementation of this vision will transform plant systems science and ripple outwards through society and across the globe. Beyond deepening our biological understanding, we envision entirely new applications. We further anticipate a wave of diversification among plant systems practitioners, stimulating community engagement and underpinning increased entrepreneurship. This surge of engagement and knowledge will help satisfy and stoke people's natural curiosity about the future, and their desire to prepare for it, as they seek fuller information about food, health, climate, and ecological systems.

  2. Abstract

    Purpose: The ability to identify the scholarship of individual authors is essential for performance evaluation. A number of factors hinder this endeavor. Common and similarly spelled surnames make it difficult to isolate the scholarship of individual authors indexed in large databases. Variations in the name spelling of individual scholars further complicate matters. Common family names in scientific powerhouses like China make it problematic to distinguish between authors possessing ubiquitous and/or anglicized surnames (as well as the same or similar first names). The assignment of unique author identifiers provides a major step toward resolving these difficulties. We maintain, however, that in and of themselves, author identifiers are not sufficient to fully address the author uncertainty problem. In this study we build on the author identifier approach by considering commonalities in fielded data between authors sharing the same surname and first initial. We illustrate our approach using three case studies.

    Design/methodology/approach: The approach we advance in this study is based on commonalities among fielded data in search results. We cast a broad initial net—i.e., a Web of Science (WOS) search for a given author's last name, followed by a comma, followed by the first initial of his or her first name (e.g., a search for 'John Doe' would assume the form: 'Doe, J'). Results for this search typically contain all of the scholarship legitimately belonging to this author in the given database (i.e., all of his or her true positives), along with a large amount of noise, or scholarship not belonging to this author (i.e., a large number of false positives). From this corpus we proceed to iteratively weed out false positives and retain true positives. Author identifiers provide a good starting point—e.g., if 'Doe, J' and 'Doe, John' share the same author identifier, this is sufficient for us to conclude these are one and the same individual. We find email addresses similarly adequate—e.g., if two author names sharing the same surname and first initial have an email address in common, we conclude these authors are the same person. Author identifier and email address data are not always available, however. When this occurs, other fields are used to address the author uncertainty problem. Commonalities among author data other than unique identifiers and email addresses are less conclusive for name consolidation purposes. For example, if 'Doe, John' and 'Doe, J' have an affiliation in common, do we conclude that these names belong to the same person? They may or may not; an institution can employ two or more faculty members sharing the same surname and first initial. Similarly, it is conceivable that two individuals with the same last name and first initial publish in the same journal, publish with the same co-authors, and/or cite the same references. Should we then ignore commonalities among these fields and conclude they are too imprecise for name consolidation purposes? It is our position that such commonalities are indeed valuable for addressing the author uncertainty problem, but more so when used in combination. Our approach makes use of automation as well as manual inspection, relying initially on author identifiers, then on commonalities among fielded data other than author identifiers, and finally on manual verification. To achieve name consolidation independent of author identifier matches, we have developed a procedure for use with the bibliometric software VantagePoint (see www.thevantagepoint.com). While the application of our technique does not exclusively depend on VantagePoint, it is the software we found most efficient in this study. The script we developed implements our name disambiguation procedure in a way that significantly reduces manual effort on the user's part. Those who seek to replicate our procedure independent of VantagePoint can do so by manually following the method we outline, but we note that manual application takes a significant amount of time and effort, especially when working with larger datasets. Our script begins by prompting the user for a surname and a first initial (for any author of interest). It then prompts the user to select a WOS field on which to consolidate author names. After this the user is prompted to point to the name of the authors field, and finally asked to identify a specific author name (referred to by the script as the primary author) within this field whom the user knows to be a true positive (a suggested approach is to point to an author name associated with one of the records that has the author's ORCID iD or email address attached to it). The script proceeds to identify and combine all author names sharing the primary author's surname and first initial that share commonalities in the WOS field on which the user chose to consolidate. This typically results in a significant reduction in the initial dataset size. After the procedure completes, the user is usually left with a much smaller (and more manageable) dataset to manually inspect (and/or apply additional name disambiguation techniques to).

    Research limitations: Match field coverage can be an issue. When field coverage is paltry, dataset reduction is less significant, leaving more manual inspection for the user. Our procedure does not lend itself to scholars who have had a legal family name change (after marriage, for example). Moreover, the technique we advance is likely (sometimes, but not always) to have difficulty with scholars who have changed careers or fields dramatically, as well as scholars whose work is highly interdisciplinary.

    Practical implications: The procedure we advance can save a significant amount of time and effort for individuals engaged in name disambiguation research, especially when the name under consideration is a more common family name. It is more effective when match field coverage is high and a number of match fields exist.

    Originality/value: Once again, the procedure we advance can save a significant amount of time and effort for individuals engaged in name disambiguation research. It combines preexisting approaches with more recent ones, harnessing the benefits of both.

    Findings: Our study applies the name disambiguation procedure we advance to three case studies. Ideal match fields are not the same for each case study, and we find that match field effectiveness is in large part a function of field coverage. The case studies also differ in original dataset size, timeframe analyzed, and the subject areas in which the authors publish. Our procedure is most effective when applied to our third case study, both in terms of list reduction and 100% retention of true positives. We attribute this to excellent match field coverage, especially in the more specific match fields, as well as a more modest and manageable number of publications. While machine learning is considered authoritative by many, we do not see it as practical or replicable; the procedure advanced herein is practical, replicable, and relatively user friendly. It might be categorized into a space between ORCID and machine learning. Machine learning approaches typically look for commonalities among citation data, which are not always available, structured, or easy to work with. The procedure we advance is intended to be applied across numerous fields in a dataset of interest (e.g., emails, co-authors, affiliations), resulting in multiple rounds of reduction. Results indicate that effective match fields include author identifiers, emails, source titles, co-authors, and ISSNs. While the script we present is not likely to result in a dataset consisting solely of true positives (at least for more common surnames), it does significantly reduce manual effort on the user's part. Dataset reduction (after our procedure is applied) is in large part a function of (a) field availability and (b) field coverage.
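
    The core consolidation step, iteratively merging name variants that share a match-field value with records already attributed to the primary author, can be sketched in a few lines of Python. This is an illustrative re-implementation under assumed data structures, not the authors' VantagePoint script; the record fields, names, emails, and source titles below are all invented for illustration:

    # Hypothetical sketch of field-commonality name consolidation.
    # Records mimic fielded WOS results from a "Doe, J" surname/initial search.

    def consolidate(records, seed_name, match_field):
        """Merge author-name variants that share a value in `match_field`
        with a record already attributed to the seed (primary) author."""
        # Match-field values already linked to the primary author.
        known = {r[match_field] for r in records
                 if r["author"] == seed_name and r.get(match_field)}
        merged = set()
        changed = True
        while changed:  # iterate until no new variants are pulled in
            changed = False
            for r in records:
                if r["author"] != seed_name and r.get(match_field) in known:
                    merged.add(r["author"])
                    r["author"] = seed_name  # consolidate the name variant
                    changed = True
            # Newly attributed records may contribute new match-field values.
            known = {r[match_field] for r in records
                     if r["author"] == seed_name and r.get(match_field)}
        return merged

    records = [
        {"author": "Doe, John", "email": "jdoe@uni.edu", "source": "J. Informetrics"},
        {"author": "Doe, J",    "email": "jdoe@uni.edu", "source": "Scientometrics"},
        {"author": "Doe, J",    "email": None,           "source": "Scientometrics"},
        {"author": "Doe, Jane", "email": "jane@other.org", "source": "Physics Lett."},
    ]

    # Round 1 consolidates on email; round 2 on source title. Each round
    # shrinks the set of unresolved variants, mirroring the multiple rounds
    # of reduction described above; 'Doe, Jane' is never merged.
    for field in ("email", "source"):
        print(field, "->", consolidate(records, "Doe, John", field))
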
  3.
    International Ocean Discovery Program (IODP) Expedition 357 successfully cored an east–west transect across the southern wall of Atlantis Massif on the western flank of the Mid-Atlantic Ridge to study the links between serpentinization processes and microbial activity in the shallow subsurface of highly altered ultramafic and mafic sequences that have been uplifted to the seafloor along a major detachment fault zone. The primary goals of this expedition were to (1) examine the role of serpentinization in driving hydrothermal systems, sustaining microbial communities, and sequestering carbon; (2) characterize the tectonomagmatic processes that lead to lithospheric heterogeneities and detachment faulting; and (3) assess how abiotic and biotic processes change with variations in rock type and progressive exposure on the seafloor. To accomplish these objectives, we developed a coring and sampling strategy based around the use of seabed rock drills—the first time that such systems have been used in scientific ocean drilling programs. This technology was chosen in hopes of achieving high recovery of the carbonate cap sequences and intact contact and deformation relationships. The expedition plans also included several engineering developments to assess geochemical parameters during drilling; sample bottom water before and after drilling; supply synthetic tracers during drilling for contamination assessment; gather downhole electrical resistivity and magnetic susceptibility logs for assessing fractures, fluid flow, and extent of serpentinization; and seal boreholes to provide opportunities for future experiments. Seventeen holes were drilled at nine sites across Atlantis Massif, with two sites on the eastern end of the southern wall (Sites M0068 and M0075), three sites in the central section of the southern wall north of the Lost City hydrothermal field (Sites M0069, M0072, and M0076), two sites on the western end (Sites M0071 and M0073), and two sites north of the southern wall in the direction of the central dome of the massif and Integrated Ocean Drilling Program Site U1309 (Sites M0070 and M0074). Use of seabed rock drills enabled collection of more than 57 m of core, with borehole penetration ranging from 1.3 to 16.44 meters below seafloor and core recoveries as high as 75% of total penetration. This high level of recovery of shallow mantle sequences is unprecedented in the history of ocean drilling. The cores recovered along the southern wall of Atlantis Massif have highly heterogeneous lithologies, types of alteration, and degrees of deformation. The ultramafic rocks are dominated by harzburgites with intervals of dunite and minor pyroxenite veins, as well as gabbroic rocks occurring as melt impregnations and veins, all of which provide information about early magmatic processes and the magmatic evolution in the southernmost portion of Atlantis Massif. Dolerite dikes and basaltic rocks represent the latest stage of magmatic activity. Overall, the ultramafic rocks recovered during Expedition 357 revealed a high degree of serpentinization, as well as metasomatic talc-amphibole-chlorite overprinting and local rodingitization. Metasomatism postdates an early phase of serpentinization but predates late-stage intrusion and alteration of dolerite dikes and the extrusion of basalt. The intensity of alteration is generally lower in the gabbroic and doleritic rocks. Chilled margins in dolerite intruded into talc-amphibole-chlorite schists are observed at the easternmost Site M0075.
Deformation in Expedition 357 cores is variable and dominated by brecciation and formation of localized shear zones; the degree of carbonate veining was lower than anticipated. All types of variably altered and deformed ultramafic and mafic rocks occur as components in sedimentary breccias and as fault scarp rubble. The sedimentary cap rocks include basaltic breccias with a carbonate sand matrix and/or fossiliferous carbonate. Fresh glass on basaltic components was observed in some of the breccias. The expedition also successfully applied new technologies, namely (1) extensively using an in situ sensor package and water sampling system on the seabed drills for evaluating real-time dissolved oxygen and methane, pH, oxidation-reduction potential, temperature, and conductivity during drilling; (2) deploying a borehole plug system for sealing seabed drill boreholes at four sites to allow access for future sampling; and (3) proving that tracers can be delivered into drilling fluids when using seabed drills. The rock drill sensor packages and water sampling enabled detection of elevated dissolved methane and hydrogen concentrations during and/or after drilling, with "hot spots" of hydrogen observed over Sites M0068–M0072 and methane over Sites M0070–M0072. Shipboard determination of contamination tracer delivery confirmed appropriate sample handling procedures for microbiological and geochemical analyses, which will aid all subsequent microbiological investigations that are part of the science party sampling plans, as well as verify this new tracer delivery technology for seabed drill rigs. Shipboard investigation of biomass density in select samples revealed relatively low and variable cell densities, and enrichment experiments set up shipboard revealed growth. Thus, we anticipate achieving many of the deep biosphere–related objectives of the expedition through continued scientific investigation in the coming years. Finally, although not an objective of the expedition, we were serendipitously able to generate a high-resolution (20 m per pixel) multibeam bathymetry map across the entire Atlantis Massif and the nearby fracture zone, Mid-Atlantic Ridge, and eastern conjugate, taking advantage of weather and operational downtime. This will assist science party members in evaluating and interpreting tectonic and mass-wasting processes at Atlantis Massif.
  4. Abstract

    Deep generative models have shown significant promise in improving performance in design space exploration. But there is limited understanding of their interpretability, a necessity when model explanations are desired and problems are ill-defined. Interpretability involves learning the design features behind design performance, a process called designer learning. This study explores the effects of human–machine collaboration on designer learning and design performance. We conduct an experiment (N = 42) in which subjects design mechanical metamaterials using a conditional variational autoencoder. The independent variables are (i) the level of automation of design synthesis: manual (the user directly manipulates design variables), manual feature-based (the user manipulates the weights of features learned by the encoder), and semi-automated feature-based (an agent generates a local design based on a start design and a user-selected step size); and (ii) feature semanticity: meaningful versus abstract features. We assess feature-specific learning using item response theory and design performance using utopia distance and hypervolume improvement. The results suggest that design performance depends on the subjects' feature-specific knowledge, emphasizing the precursory role of learning. Semi-automated synthesis locally improves the utopia distance, but it does not yield higher global hypervolume improvement than manual design synthesis, and it reduces designer learning compared to manual feature-based synthesis. Subjects learn semantic features better than abstract features only when design performance is sensitive to them. Potential cognitive constructs influencing learning in human–machine collaborative settings are discussed, such as cognitive load and recognition heuristics.
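
    As a rough illustration of the semi-automated feature-based condition, the sketch below generates local candidates around a start design in latent feature space and keeps the best one. Everything here is assumed for illustration: the toy linear decoder, the placeholder objective, the shapes, and the variable names stand in for the paper's trained conditional variational autoencoder and metamaterial objectives.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for a trained CVAE decoder: maps latent features z (plus a
    # condition c) to design variables. Weights and shapes are invented.
    W = rng.normal(size=(8, 32))  # 8 latent features -> 32 design variables

    def decode(z, c):
        """Toy decoder: linear map plus a condition offset (illustrative only)."""
        return np.tanh(z @ W + c)

    def objective(x):
        """Placeholder performance metric for a candidate design."""
        return -np.sum((x - 0.5) ** 2)

    def semi_automated_step(z_start, c, step_size, n_candidates=16):
        """Agent proposes local candidates around the start design in feature
        space; the user controls only the step size, as in the semi-automated
        feature-based condition described above."""
        candidates = z_start + step_size * rng.normal(size=(n_candidates, z_start.size))
        scores = [objective(decode(z, c)) for z in candidates]
        return candidates[int(np.argmax(scores))]

    z = rng.normal(size=8)   # latent features of the start design
    c = 0.1                  # conditioning value (e.g., a target property)
    for _ in range(5):       # a few user-driven refinement steps
        z = semi_automated_step(z, c, step_size=0.25)
    print("final design vars:", np.round(decode(z, c), 2)[:5])
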
  5.
    Measurement of the ice nucleation (IN) temperature of liquid solutions at sub-ambient temperatures has applications in atmospheric science, water quality, food storage, protein crystallography, and pharmaceutical sciences. Here we present details on the construction of a temperature-controlled microfluidic platform with multiple individually addressable temperature zones and on-chip temperature sensors for high-throughput IN studies in droplets. We developed, for the first time, automated droplet freezing detection methods in a microfluidic device, using a deep neural network (DNN) and a polarized optical method based on intensity thresholding to classify droplets without manual counting. This platform has potential applications in continuous monitoring of liquid samples consisting of aerosols to quantify their IN behavior, or in checking for contaminants in pure water. A case study of the two detection methods was performed using Snomax® (Snomax International, Englewood, CO, USA), an ideal ice nucleating particle (INP). Effects of aging and heat treatment of Snomax® were studied with Fourier transform infrared (FTIR) spectroscopy and the microfluidic platform to correlate secondary structure changes of the IN protein in Snomax® with IN temperature. We found that aging at room temperature had a mild impact on ice nucleation ability, but heat treatment at 95 °C had a more pronounced effect, reducing the ice nucleation onset temperature by more than 7 °C and flattening the overall frozen fraction curve. Results also demonstrated that our setup can generate droplets at a rate of about 1500/min and requires minimal human intervention for DNN classification.
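
    The polarized optical method lends itself to a compact illustration: under crossed polarizers, frozen (birefringent) droplets appear bright while liquid droplets stay dark, so a mean-intensity threshold per droplet region classifies them. The sketch below is a minimal stand-in; the threshold value, the synthetic frame, and the region coordinates are all assumptions, and the paper's DNN pipeline is not reproduced.

    import numpy as np

    # Assumed mean 8-bit pixel intensity separating frozen from liquid droplets.
    FREEZE_THRESHOLD = 120

    def frozen_fraction(frame, droplet_regions, threshold=FREEZE_THRESHOLD):
        """Classify each droplet region as frozen/liquid by mean intensity
        and return the fraction frozen in this frame."""
        frozen = 0
        for (r0, r1, c0, c1) in droplet_regions:
            if frame[r0:r1, c0:c1].mean() > threshold:
                frozen += 1
        return frozen / len(droplet_regions)

    # Synthetic stand-in for a camera frame: two bright (frozen) droplets
    # and one dark (liquid) droplet on a dark background.
    frame = np.full((100, 300), 10, dtype=np.uint8)
    frame[40:60, 20:60] = 200    # frozen droplet
    frame[40:60, 130:170] = 190  # frozen droplet
    frame[40:60, 240:280] = 30   # liquid droplet
    regions = [(40, 60, 20, 60), (40, 60, 130, 170), (40, 60, 240, 280)]
    print("frozen fraction:", frozen_fraction(frame, regions))  # -> 0.666...

    Applied frame by frame as the stage temperature ramps down, the same per-droplet classification yields the frozen fraction curve discussed above.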