This content will become publicly available on January 28, 2026

Title: TROV - A Model and Vocabulary for Describing Transparent Research Objects
The Transparent Research Object Vocabulary (TROV) is a key element of the Transparency Certified (TRACE) approach to ensuring research trustworthiness. In contrast with methods that entail repeating computations in part or in full to verify that the descriptions of methods included in a publication are sufficient to reproduce reported results, the TRACE approach depends on a controlled computing environment termed a Transparent Research System (TRS) to guarantee that accurate, sufficiently complete, and otherwise trustworthy records are captured when results are obtained in the first place. Records identifying (1) the digital artifacts and computations that yielded a research result, (2) the TRS that witnessed the artifacts and supervised the computations, and (3) the specific conditions enforced by the TRS that warrant trust in these records, together constitute a Transparent Research Object (TRO). Digital signatures provided by the TRS and by a trusted third-party timestamp authority (TSA) guarantee the integrity and authenticity of the TRO. The controlled vocabulary TROV provides means to declare and query the properties of a TRO, to enumerate the dimensions of trustworthiness the TRS asserts for a TRO, and to verify that each such assertion is warranted by the documented capabilities of the TRS. Our approach for describing, publishing, and working with TROs imposes no restrictions on how computational artifacts are packaged or otherwise shared, and aims to be interoperable with, rather than to replace, current and future Research Object standards, archival formats, and repository layouts.
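The TRO model maps naturally onto RDF. As a minimal sketch, assuming a hypothetical namespace and made-up term names rather than the published TROV vocabulary, a TRO and its trust assertions could be declared and checked with rdflib in Python:

```python
# Illustrative sketch only: the "trov:" terms below are hypothetical stand-ins,
# not the published TROV vocabulary.
from rdflib import Graph, Namespace, RDF

TROV = Namespace("https://example.org/trov#")   # placeholder namespace
EX = Namespace("https://example.org/run/42#")   # one hypothetical research run

g = Graph()
g.bind("trov", TROV)

# (1) The research object and the artifacts/computation it records
g.add((EX.tro, RDF.type, TROV.TransparentResearchObject))
g.add((EX.tro, TROV.hasArtifact, EX.inputData))
g.add((EX.tro, TROV.hasArtifact, EX.resultFigure))
g.add((EX.tro, TROV.hasComputation, EX.analysisRun))

# (2) The system that witnessed the artifacts and supervised the computation
g.add((EX.tro, TROV.witnessedBy, EX.trs))
g.add((EX.trs, RDF.type, TROV.TransparentResearchSystem))
g.add((EX.trs, TROV.hasCapability, TROV.NoNetworkAccess))

# (3) A dimension of trustworthiness asserted for this TRO
g.add((EX.tro, TROV.asserts, TROV.NoNetworkAccess))

# Check that every assertion is warranted by a documented TRS capability
query = """
SELECT ?assertion WHERE {
  ?tro trov:asserts ?assertion ;
       trov:witnessedBy ?trs .
  FILTER NOT EXISTS { ?trs trov:hasCapability ?assertion . }
}
"""
unwarranted = list(g.query(query, initNs={"trov": TROV}))
print("unwarranted assertions:", unwarranted or "none")
```

The final query asks whether any asserted dimension of trustworthiness lacks a matching documented capability of the witnessing TRS; an empty result means every assertion is warranted.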
Award ID(s): 2209630, 2209629
PAR ID: 10612046
Author(s) / Creator(s):
Publisher / Repository: Edinburgh University
Date Published:
Journal Name: International Journal of Digital Curation
Volume: 19
Issue: 1
ISSN: 1746-8256
Page Range / eLocation ID: 7
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like This
  1. Software traceability establishes a network of connections between diverse artifacts such as requirements, design, and code. However, given the cost and effort of creating and maintaining trace links manually, researchers have proposed automated approaches using information retrieval techniques. Current approaches focus almost entirely upon generating links between pairs of artifacts and have not leveraged the broader network of interconnected artifacts. In this paper we investigate the use of intermediate artifacts to enhance the accuracy of the generated trace links - focusing on paths consisting of source, target, and intermediate artifacts. We propose and evaluate combinations of techniques for computing semantic similarity, scaling scores across multiple paths, and aggregating results from multiple paths. We report results from five projects, including one large industrial project. We find that leveraging intermediate artifacts improves the accuracy of end-to-end trace retrieval across all datasets and accuracy metrics. After further analysis, we discover that leveraging intermediate artifacts is only helpful when a project's artifacts share a common vocabulary, which tends to occur in refinement and decomposition hierarchies of artifacts. Given our hybrid approach that integrates both direct and transitive links, we observed little to no loss of accuracy when intermediate artifacts lacked a shared vocabulary with source or target artifacts. 
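As a rough sketch of the transitive idea in item 1, assuming a toy lexical similarity measure and a min/max path combination rather than the techniques evaluated in the paper, an end-to-end score can fold in source-to-intermediate-to-target paths like this:

```python
# Illustrative sketch of trace retrieval through intermediate artifacts.
# The similarity function, path scaling, and aggregation below are
# assumptions for demonstration, not the paper's exact techniques.

def similarity(a: str, b: str) -> float:
    """Toy lexical similarity: Jaccard overlap of word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def transitive_score(source: str, target: str, intermediates: list[str]) -> float:
    """Combine the direct link with source -> intermediate -> target paths."""
    direct = similarity(source, target)
    # Score each path through an intermediate artifact, limited by its weaker hop.
    path_scores = [
        min(similarity(source, mid), similarity(mid, target))
        for mid in intermediates
    ]
    # Aggregate: take the best evidence from any path, direct or transitive.
    return max([direct, *path_scores])

requirement = "user shall export report as pdf"
code_summary = "export report pdf writer"
design_docs = ["design: report export module generates pdf output"]
print(transitive_score(requirement, code_summary, design_docs))
```

A path score bounded by its weaker hop illustrates why a shared vocabulary along the refinement hierarchy matters: if either hop has little lexical overlap, the transitive path contributes nothing beyond the direct link.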
  2. Synopsis: Morphological features are the primary identifying properties of most animals and key to many comparative physiological studies, yet current techniques for preservation and documentation of soft-bodied marine animals are limited in terms of quality and accessibility. Digital records can complement physical specimens, with a wide array of applications ranging from species description to kinematics modeling, but options are lacking for creating models of soft-bodied semi-transparent underwater animals. We developed a lab-based technique that can live-scan semi-transparent, submerged animals and objects within seconds. To demonstrate the method, we generated full three-dimensional reconstructions (3DRs) of an object of known dimensions for verification, as well as two live marine animals—a siphonophore and an amphipod—allowing detailed measurements on each. Techniques like these pave the way for faster data capture, integrative and comparative quantitative approaches, and more accessible collections of fragile and rare biological samples.
  3. This work introduces Text2FX, a method that leverages CLAP embeddings and differentiable digital signal processing to control audio effects, such as equalization and reverberation, using open-vocabulary natural language prompts (e.g., "make this sound in-your-face and bold"). Text2FX operates without retraining any models, relying instead on single-instance optimization within the existing embedding space, thus enabling a flexible, scalable approach to open-vocabulary sound transformations through interpretable and disentangled FX manipulation. We show that CLAP encodes valuable information for controlling audio effects and propose two optimization approaches using CLAP to map text to audio effect parameters. While we demonstrate with CLAP, this approach is applicable to any shared text-audio embedding space. Similarly, while we demonstrate with equalization and reverberation, any differentiable audio effect may be controlled. We conduct a listener study with diverse text prompts and source audio to evaluate the quality and alignment of these methods with human perception. Demos and code are available at anniejchu.github.io/text2fx
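As a loose sketch of the single-instance optimization loop in item 3, with embed_text, embed_audio, and apply_fx written as toy stand-ins (not CLAP and not the Text2FX effect chain), gradient descent can steer effect parameters toward a text prompt in a shared embedding space:

```python
# Sketch of optimizing audio-effect parameters against a text prompt in a
# shared text-audio embedding space. embed_text, embed_audio, and apply_fx
# are placeholder stubs, not CLAP or the Text2FX implementation.
import torch
import torch.nn.functional as F

def embed_text(prompt: str) -> torch.Tensor:
    return torch.randn(512)                      # stand-in for a text embedding

def embed_audio(audio: torch.Tensor) -> torch.Tensor:
    return audio.mean().repeat(512)              # stand-in, kept differentiable

def apply_fx(audio: torch.Tensor, params: torch.Tensor) -> torch.Tensor:
    gain, mix = params                           # toy "effect": gain plus wet/dry mix
    return audio * gain * mix + audio * (1 - mix)

audio = torch.randn(48_000)                      # one second of audio at 48 kHz
target = embed_text("make this sound in-your-face and bold")

params = torch.tensor([1.0, 0.5], requires_grad=True)   # effect parameters to optimize
opt = torch.optim.Adam([params], lr=1e-2)

for _ in range(200):                             # single-instance optimization; no model training
    opt.zero_grad()
    rendered = apply_fx(audio, params)
    loss = 1 - F.cosine_similarity(embed_audio(rendered), target, dim=0)
    loss.backward()
    opt.step()

print("optimized params:", params.detach())
```

Because only the handful of effect parameters receive gradients, the loop adapts to each new prompt and source audio without retraining the encoders.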
  4. Abstract: Object names are a major component of early vocabularies and learning object names depends on being able to visually recognize objects in the world. However, the fundamental visual challenge of the moment‐to‐moment variations in object appearances that learners must resolve has received little attention in word learning research. Here we provide the first evidence that image‐level object variability matters and may be the link that connects infant object manipulation to vocabulary development. Using head‐mounted eye tracking, the present study objectively measured individual differences in the moment‐to‐moment variability of visual instances of the same object, from infants’ first‐person views. Infants who generated more variable visual object images through manual object manipulation at 15 months of age experienced greater vocabulary growth over the next six months. Elucidating infants’ everyday visual experiences with objects may constitute a crucial missing link in our understanding of the developmental trajectory of object name learning.
  5. Butt, Miriam; Findlay, Jamie Y; Toivonen, Ida (Ed.)
    In this paper we examine the argumenthood properties of Controlled Complement Clauses and Non-Complement Subordinate Clauses in O’dam. We show that in O’dam only controlled COMPs are arguments, while other putative complement clauses are adjunct relative clauses that elaborate on a pronominal OBJ incorporated in the matrix verb. We use the LRFG framework to capture both the argumenthood properties of the two types of clauses in O’dam as well as the patterns of object marking on the matrix verb by taking advantage of mismatches between c-structure (phrase structure and f-descriptions) and v-structure (the vocabulary items realizing this structure). 