Abstract: In the biomedical domain, taxonomies organize the acquisition modalities of scientific images in hierarchical structures. Such taxonomies leverage large sets of correct image labels and provide essential information about the importance of a scientific publication, which could then be used in biocuration tasks. However, the hierarchical nature of the labels, the overhead of processing images, the absence or incompleteness of labelled data and the expertise required to label this type of data impede the creation of useful datasets for biocuration. From a multi-year collaboration with biocurators and text-mining researchers, we derive an iterative visual analytics and active learning (AL) strategy to address these challenges. We implement this strategy in a system called BI-LAVA (Biocuration with Hierarchical Image Labelling through Active Learning and Visual Analytics). BI-LAVA leverages a small set of image labels, a hierarchical set of image classifiers and AL to help model builders deal with incomplete ground-truth labels, target a hierarchical taxonomy of image modalities and classify a large pool of unlabelled images. BI-LAVA's front end uses custom encodings to represent data distributions, taxonomies, image projections and neighbourhoods of image thumbnails, which help model builders explore an unfamiliar image dataset and taxonomy and correct and generate labels. An evaluation with machine learning practitioners shows that our mixed human-machine approach successfully supports domain experts in understanding the characteristics of classes within the taxonomy, as well as validating and improving data quality in labelled and unlabelled collections.
Free, publicly-accessible full text available February 1, 2026.
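The core loop the abstract describes, namely training on a small labelled set and using active learning to pick which unlabelled images a biocurator should label next, can be sketched with uncertainty sampling. This is a minimal illustration under stated assumptions, not BI-LAVA's implementation: the feature vectors, class counts and least-confidence criterion here are hypothetical stand-ins.

```python
# Sketch of one uncertainty-sampling active-learning round over an image pool.
# All data below is synthetic; a real system would use image features and a
# hierarchy-aware classifier rather than a flat logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical pool: 200 labelled and 1000 unlabelled 16-d feature vectors.
X_labelled = rng.normal(size=(200, 16))
y_labelled = rng.integers(0, 4, size=200)      # 4 stand-in image modalities
X_unlabelled = rng.normal(size=(1000, 16))

# 1. Train on the small labelled set.
clf = LogisticRegression(max_iter=500).fit(X_labelled, y_labelled)

# 2. Score the unlabelled pool: least-confident predictions score highest.
probs = clf.predict_proba(X_unlabelled)
uncertainty = 1.0 - probs.max(axis=1)

# 3. Select a batch of the most uncertain images for a human to label next.
batch = np.argsort(uncertainty)[-10:]
print(batch.shape)
```

After the human labels the batch, it moves into the labelled set and the loop repeats; BI-LAVA wraps this cycle in visual analytics so the model builder can also inspect and correct existing labels.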
-
Abstract: Motivation: Figures in biomedical papers communicate essential information with the potential to identify relevant documents in biomedical and clinical settings. However, academic search interfaces mainly search over text fields. Results: We describe a search system for biomedical documents that leverages image modalities and an existing index server. We integrate a problem-specific taxonomy of image modalities and image-based data into a custom search system. Our solution features a front-end interface to enhance classical document search results with image-related data, including page thumbnails, figures, captions and image-modality information. We demonstrate the system on a subset of the CORD-19 document collection. A quantitative evaluation demonstrates higher precision and recall for biomedical document retrieval. A qualitative evaluation with domain experts further highlights our solution's benefits to biomedical search. Availability and implementation: A demonstration is available at https://runachay.evl.uic.edu/scholar. Our code and image models can be accessed via github.com/uic-evl/bio-search. The dataset is continuously expanded.
-
Abstract: Developing applicable clinical machine learning models is a difficult task when the data includes spatial information, for example, radiation dose distributions across adjacent organs at risk. We describe the co-design of a modeling system, DASS, to support the hybrid human-machine development and validation of predictive models for estimating long-term toxicities related to radiotherapy doses in head and neck cancer patients. Developed in collaboration with domain experts in oncology and data mining, DASS incorporates human-in-the-loop visual steering, spatial data, and explainable AI to augment domain knowledge with automatic data mining. We demonstrate DASS with the development of two practical clinical stratification models and report feedback from domain experts. Finally, we describe the design lessons learned from this collaborative experience.
-
We present the development, architecture, and features of a new multi-device mHealth software platform to support near real-time remote monitoring of metabolic health and timely intervention in the treatment and survivorship of cancer patients. Our platform, mEnergy, follows a human-centered design process and integrates consumer-grade hardware (Fitbit wearable sensor devices, smartphones, and Withings smart scales) in a unified, web-based framework. mEnergy can aid oncologists in identifying early indicators of muscle wasting (sarcopenia) due to sleep disturbance, insufficient weight recovery, or reduced/limited activity. The platform aims for a smooth transition into clinical practice and increased adherence to evidence-based recommendations, in particular in underserved geographical areas. This toxicity-surveillance approach based on mHealth technologies can improve treatment outcomes, quality of life, and survivorship.
Free, publicly-accessible full text available July 14, 2026.
-
Free, publicly-accessible full text available June 1, 2026
-
September 2023 marked the 50th anniversary of the Electronic Visualization Laboratory (EVL). This paper summarizes EVL's efforts in Visual Data Science, with a focus on the many networked, immersive, collaborative visualization and virtual-reality (VR) systems and applications the Lab has developed and deployed, as well as lessons learned and future plans.
Free, publicly-accessible full text available March 8, 2026.
-
Free, publicly-accessible full text available February 1, 2026
-
Free, publicly-accessible full text available January 1, 2026
-
Free, publicly-accessible full text available January 1, 2026
