Title: Comparison of Tools for Digitally Tracking Changes in Text
Tracking changes in digital texts is a longstanding interface challenge, as early digital technologies left no recorded traces of alterations. Currently, two key categories of tools track text changes: code editing tools and word processing tools. Each has implemented different interface patterns to accomplish several goals: attributing change authorship, tracking the time of a change, recording the change action taken, and specifying the location and content of the change. While some visual characteristics of change tracking are consistent across all tools, there are significant differences in change representation divided along tool-type lines that may reflect their specific cultures of use. Overall, however, there is a limited range of visual methods for representing changes to digital text over time.
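The change-representation conventions used by code editing tools can be illustrated with the unified-diff format, which encodes all four goals named above except authorship: location (the `@@` hunk header), action (`-` removal, `+` addition), and content. A minimal sketch using Python's standard-library difflib (the example texts are illustrative):

```python
import difflib

# Two versions of a short text, as a code editing tool would see them.
before = ["The quick brown fox\n", "jumps over the dog\n"]
after = ["The quick brown fox\n", "jumps over the lazy dog\n"]

# unified_diff marks the location (@@ hunk header), the action taken
# ('-' for removal, '+' for addition), and the content of each change.
diff = list(difflib.unified_diff(before, after, fromfile="v1", tofile="v2"))
print("".join(diff))
```

Authorship and timing, by contrast, are typically layered on top of this format by version-control metadata rather than by the diff itself.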
Award ID(s): 1940670, 1940679, 1940713
PAR ID: 10378432
Journal Name: Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Volume: 66
Issue: 1
ISSN: 2169-5067
Page Range / eLocation ID: 1365 to 1369
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. As technology continues to shape how students read and write, digital literacy practices have become increasingly multimodal and complex—posing new challenges for researchers seeking to understand these processes in authentic educational settings. This paper presents three qualitative studies that use multimodal analyses and visual modeling to examine digital reading and writing across age groups, learning contexts, and literacy activities. The first study introduces collaborative composing snapshots, a method that visually maps third graders’ digital collaborative writing processes and highlights how young learners blend spoken, written, and visual modes in real-time online collaboration. The second study uses digital reading timescapes to track the multimodal reading behaviors of fifth graders—such as highlighting, re-reading, and gaze patterns—offering insights into how these actions unfold over time to support comprehension. The third study explores multimodal composing timescapes and transmediation visualizations to analyze how bilingual high school students compose across languages and modes, including text, image, and sounds. Together, these innovative methods illustrate the power of multimodal analysis and visual modeling for capturing the complexity of digital literacy development. They offer valuable tools for designing more inclusive, equitable, and developmentally responsive digital learning environments—particularly for culturally and linguistically diverse learners. 
  2. MaxMSP is a visual programming language for creating interactive audiovisual media that has found great success as a flexible and accessible option for computer music. However, the visual interface requires manual object placement and connection, which can be inefficient. Automated patch editing is possible either by visual programming with the [thispatcher] object or text-based programming with the [js] object. However, these objects cannot automatically create and save new patches, and they operate at run-time only, requiring live input to trigger patch construction. There is no solution for automated creation of multiple patches at compile-time, such that the constructed patches do not contain their own constructors. To this end, we present MaxPy, an open-source Python package for programmatic construction and manipulation of MaxMSP patches. MaxPy replaces the manual actions of placing objects, connecting patchcords, and saving patch files with text-based Python functions, thus enabling dynamic, procedural, high-volume patch generation at compile-time. MaxPy also includes the ability to import existing patches, allowing users to move freely between text-based Python programming and visual programming with the Max GUI. MaxPy enables composers, programmers, and creators to explore expanded possibilities for complex, dynamic, and algorithmic patch construction through text-based Python programming of MaxMSP.
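The idea of compile-time patch construction can be made concrete by noting that Max patches (.maxpat files) are JSON documents. The sketch below writes a minimal patch file directly; the `patcher`/`boxes`/`lines` skeleton follows the JSON-based .maxpat format, but this minimal structure is an illustrative assumption and is not MaxPy's actual API or output:

```python
import json

# Hypothetical minimal patch: a [cycle~ 440] oscillator wired to [dac~].
# The "patcher"/"boxes"/"lines" keys follow the JSON .maxpat convention;
# this skeleton is an illustrative assumption, not MaxPy code.
patch = {
    "patcher": {
        "boxes": [
            {"box": {"id": "obj-1", "maxclass": "newobj",
                     "text": "cycle~ 440",
                     "patching_rect": [50, 50, 70, 22]}},
            {"box": {"id": "obj-2", "maxclass": "newobj",
                     "text": "dac~",
                     "patching_rect": [50, 120, 40, 22]}},
        ],
        "lines": [
            {"patchline": {"source": ["obj-1", 0],
                           "destination": ["obj-2", 0]}}
        ],
    }
}

# Saving at "compile time": the generated patch file contains only the
# constructed patch, not the constructor code that produced it.
with open("generated.maxpat", "w") as f:
    json.dump(patch, f, indent=2)
```

A tool like MaxPy wraps this kind of file generation behind Python functions for placing objects and connecting patchcords, so the user never manipulates the JSON directly.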
  3.
    Table2Text systems generate textual output based on structured data utilizing machine learning. These systems are essential for fluent natural language interfaces in tools such as virtual assistants; however, left to generate freely, these ML systems often produce misleading or unexpected outputs. GenNI (Generation Negotiation Interface) is an interactive visual system for high-level human-AI collaboration in producing descriptive text. The tool utilizes a deep learning model designed with explicit control states. These controls allow users to globally constrain model generations without sacrificing the representational power of the deep learning models. The visual interface makes it possible for users to interact with AI systems following a Refine-Forecast paradigm to ensure that the generation system acts in a manner human users find suitable. We report multiple use cases from two experiments that improve over uncontrolled generation approaches while at the same time providing fine-grained control. A demo and source code are available at https://genni.vizhub.ai.
  4. Cryptographic tools for authenticating the provenance of web-based information are a promising approach to increasing trust in online news and information. However, making these tools’ technical assurances sufficiently usable for news consumers is essential to realizing their potential. We conduct an online study with 160 participants to investigate how the presentation (visual vs. textual) and location (on a news article page or a third-party site) of the provenance information affects news consumers’ perception of the content’s credibility and trustworthiness, as well as the usability of the tool itself. We find that although the visual presentation of provenance information is more challenging to adopt than its text-based counterpart, this approach leads its users to put more faith in the credibility and trustworthiness of digital news, especially when situated internally to the news article. 
  5. Blascheck, Tanja; Bradshaw, Jessica; Vrzakova, Hana (Ed.)
    Virtual Reality (VR) technology has advanced to include eye-tracking, allowing novel research, such as investigating how our visual system coordinates eye movements with changes in perceptual depth. The purpose of this study was to examine whether eye tracking could track perceptual depth changes during a visual discrimination task. We derived two depth-dependent variables from eye tracker data: eye vergence angle (EVA) and interpupillary distance (IPD). As hypothesized, our results revealed that shifting gaze from near-to-far depth significantly decreased EVA and increased IPD, while the opposite pattern was observed while shifting from far-to-near. Importantly, the amount of change in these variables tracked closely with relative changes in perceptual depth, and supported the hypothesis that eye tracker data may be used to infer real-time changes in perceptual depth in VR. Our method could be used as a new tool to adaptively render information based on depth and improve the VR user experience. 
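The eye vergence angle (EVA) that this study derives follows from basic geometry: given the two eyes' gaze direction vectors, the vergence angle is the angle between them, which grows as the eyes converge on a nearer target. The function and example vectors below are an illustrative sketch under that geometric assumption, not the authors' implementation:

```python
import math

def vergence_angle_deg(left_gaze, right_gaze):
    """Angle in degrees between two 3D gaze direction vectors.

    Assumes the eye tracker reports per-eye gaze directions; vectors are
    normalized here for safety. A larger angle indicates convergence on
    a nearer target (illustrative sketch, not the study's code).
    """
    def normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return tuple(x / n for x in v)

    l, r = normalize(left_gaze), normalize(right_gaze)
    dot = sum(a * b for a, b in zip(l, r))
    # Clamp against floating-point drift before acos.
    dot = max(-1.0, min(1.0, dot))
    return math.degrees(math.acos(dot))

# Eyes angled inward toward a near target vs. nearly parallel for a far one.
near = vergence_angle_deg((0.10, 0.0, 0.995), (-0.10, 0.0, 0.995))
far = vergence_angle_deg((0.01, 0.0, 0.9999), (-0.01, 0.0, 0.9999))
assert near > far  # EVA decreases with a near-to-far gaze shift, as reported
```

This matches the study's finding that shifting gaze from near to far depth decreases EVA; interpupillary distance (IPD), measured between pupil positions rather than gaze directions, moves in the opposite direction.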