

Title: Three Perceptual Tools for Seeing and Understanding Visualized Data
The visual system evolved and develops to process the scenes, faces, and objects of the natural world, but people adapt this powerful system to process data within an artificial world of visualizations. To extract patterns in data from these artificial displays, viewers appear to use at least three perceptual tools, including a tool that extracts global statistics, one that extracts shapes within the data, and one that produces sentence-like comparisons. A better understanding of the power, limits, and deployment of these tools would lead to better guidelines for designing effective data displays.
Award ID(s):
1901485
PAR ID:
10350293
Author(s) / Creator(s):
Date Published:
Journal Name:
Current Directions in Psychological Science
Volume:
30
Issue:
5
ISSN:
0963-7214
Page Range / eLocation ID:
367 to 375
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The release and rapid diffusion of ChatGPT have forced teachers and researchers around the world to grapple with the consequences of artificial intelligence (AI) for education. For second language educators, AI-generated writing tools such as ChatGPT present special challenges that must be addressed to better support learners. We propose a five-part pedagogical framework that seeks to support second language learners by acknowledging both the immediate and long-term contexts in which we must teach students about these tools: understand, access, prompt, corroborate, and incorporate. By teaching our students how to effectively partner with AI, we can better prepare them for the changing landscape of technology use in the world beyond the classroom.
  2. Recent advances in Large Language Models (LLMs) show the potential to significantly augment or even replace complex human writing activities. However, for complex tasks where people must both make decisions and write a justification, the trade-offs between making work more efficient and hindering those decisions remain unclear. In this paper, we explore this question in the context of designing intelligent scaffolding for writing meta-reviews in an academic peer review process. We prototyped a system called MetaWriter, trained on five years of open peer review data, to support meta-reviewing. The system highlights common topics in the original peer reviews, extracts key points made by each reviewer, and, on request, provides a preliminary draft of a meta-review that can be further edited. To understand how novice and experienced meta-reviewers use MetaWriter, we conducted a within-subject study with 32 participants. Each participant wrote meta-reviews for two papers: one with and one without MetaWriter. We found that MetaWriter significantly expedited the authoring process and improved the coverage of meta-reviews, as rated by experts, compared to the baseline. While participants recognized the efficiency benefits, they raised concerns around trust, over-reliance, and agency. We also interviewed six paper authors to understand their opinions on using machine intelligence to support the peer review process and report critical reflections. We discuss implications for future interactive AI writing tools that support complex synthesis work.
  3. Memory forensics is one of the most important emerging areas in computer forensics. In memory forensics, analysis of userland memory is a technique that analyzes per-process runtime data structures and extracts significant evidence for application-specific investigations. In this research, our focus is to examine the critical challenges faced by process memory acquisition that can impact object and data recovery. In particular, this work seeks to address the issues of consistency and reliability in userland memory forensics on Android. In real-world investigations, memory acquisition tools record information while the device is running. In such scenarios, each application's memory content may be in flux due to updates in progress, garbage collection activities, changes in process states, etc. In this paper, we focus on runtime activities such as garbage collection and process states and the impact they have on object recovery in userland memory forensics. The objective of this research is to assess the reliability of Android userland memory forensic tools and to provide new research directions for developing a metric study to measure that reliability. We evaluated this objective by analyzing memory dumps acquired from 30 apps in different Process Acquisition Modes. A Process Acquisition Mode (PAM) is the memory dump of a process extracted while external runtime factors are triggered. Our research identified an inconsistency in the number of objects recovered when analyzing process memory dumps with runtime factors included. In particular, the evaluation revealed differences in the count of objects recovered across acquisition modes. We used Euclidean distance and covariance as the metrics for our study; these two metrics enabled us to identify how the change in the number of recovered objects across PAMs impacts forensic analysis. Our conclusion revealed that runtime factors can result in about 20% data loss on average, showing that these factors have a clear impact on object recovery.
  4. Sudeepa Roy and Jun Yang (Ed.)
    Data we encounter in the real world, such as printed menus, business documents, and nutrition labels, are often ad hoc. Valuable insights can be gathered from these data when they are combined with additional information. Recent advances in computer vision and augmented reality have made it possible to understand and enrich such data. Joining real-world data with remote data stores and surfacing the enhanced results in place, within an augmented reality interface, can lead to better and more informed decision-making. However, building end-user applications that perform these joins with minimal human effort is not straightforward: it requires a diverse set of expertise spanning machine learning, database systems, computer vision, and data visualization. To address this complexity, we present Quill, a framework for developing end-to-end augmented reality applications modeled as a join between real-world data and remote data stores. Using an intuitive domain-specific language, Quill accelerates the development of end-user applications that perform this join. Through experiments on applications from multiple domains, we show that Quill not only expedites development but also lets developers build applications that are more performant than those built with standard developer tools, thanks to its ability to optimize declarative specifications. We also performed a user-focused study to investigate how easy (or difficult) it is to develop augmented reality applications with Quill compared to other existing tools. Our results show that Quill lowers the technical background developers need to build and deploy such applications compared to existing developer tools.
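The kind of join this abstract describes can be sketched in a few lines. Everything here is a hypothetical illustration of the concept (item names, the `nutrition_db` mapping, the `enrich` helper), not Quill's actual DSL or API:

```python
# Hypothetical illustration of joining real-world data with a remote
# store: text recognized from a printed menu (e.g., via OCR) is joined
# with a nutrition database, and the enriched rows would then be
# surfaced in place in the AR interface.

recognized_items = ["Caesar Salad", "Margherita Pizza"]  # from vision/OCR

# Stand-in for a remote data store keyed by normalized item name.
nutrition_db = {
    "caesar salad": {"calories": 480},
    "margherita pizza": {"calories": 850},
}

def enrich(items, store):
    """Join recognized items with remote records, skipping misses."""
    results = []
    for item in items:
        record = store.get(item.lower())
        if record is not None:
            results.append({"item": item, **record})
    return results

enriched = enrich(recognized_items, nutrition_db)
```

A framework like the one described would let the developer state this join declaratively and optimize where recognition, lookup, and rendering run, rather than hand-coding each stage.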
  5.
    The CS education community has developed many educational tools in recent years, such as interactive exercises. Often the developer makes them freely available for use, hosted on their own server, and they are usually directly accessible within the instructor's LMS through the LTI protocol. As convenient as this can be, instructors using these third-party tools for their courses can experience issues related to data access and privacy. The tools typically collect clickstream data on student use, but they might not make it easy for the instructor to access these data, and the institution might be concerned about privacy violations. While the developers might allow and even support local installation of the tool, this can be a difficult process unless the tool is carefully designed for third-party installation. Integration of small tools within larger frameworks (such as a type of interactive exercise within an eTextbook framework) is also difficult without proper design. This paper describes an ongoing containerization effort for the OpenDSA eTextbook project. Our goal is both to serve our own needs by creating an easier-to-manage decomposition of the many tools and sub-servers required by this complex system, and to provide an easily installable production environment that instructors can run locally. This new system provides better access to developer-level data analysis tools and potentially removes many FERPA-related privacy concerns. We also describe our efforts to integrate Caliper Analytics into OpenDSA to expand its data collection and analysis services. We hope that our containerization architecture can provide a roadmap for similar projects to follow.
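A decomposition like the one described might be expressed as a Compose file along these lines. The service names, build paths, and images are hypothetical illustrations of the pattern (frontend, exercise sub-server, shared datastore), not OpenDSA's actual configuration:

```yaml
# Hypothetical sketch of a containerized eTextbook deployment;
# names and images do not reflect OpenDSA's real setup.
services:
  etextbook:            # main eTextbook web frontend
    build: ./etextbook
    ports:
      - "8080:8080"
    depends_on:
      - exercise-server
      - db
  exercise-server:      # interactive-exercise sub-server
    build: ./exercises
  db:                   # datastore for clickstream / Caliper events
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example   # placeholder; use a secret locally
```

Keeping each sub-server in its own container is what makes a local, instructor-run installation tractable: the whole stack comes up with one command, and the clickstream data never leaves the institution's machine.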