Title: Evaluating Alignment Approaches in Superimposed Time-Series and Temporal Event-Sequence Visualizations
Composite temporal event sequence visualizations have included sentinel event alignment techniques to cope with data volume and variety. Prior work has demonstrated the utility of single-event alignment for understanding the precursor, co-occurring, and aftereffect events surrounding a sentinel event. However, the usefulness of single-event alignment has not been sufficiently evaluated in composite visualizations. Furthermore, recently proposed dual-event alignment techniques have not been empirically evaluated. In this work, we designed tasks around temporal event sequence and timing analysis and conducted a controlled experiment on Amazon Mechanical Turk to examine four sentinel event alignment approaches: no sentinel event alignment (NoAlign), single-event alignment (SingleAlign), dual-event alignment with left justification (DualLeft), and dual-event alignment with stretch justification (DualStretch). Differences between approaches were most pronounced with more rows of data. For understanding intermediate events between two sentinel events, dual-event alignment was the clear winner for correctness: 71% vs. 18% for NoAlign and SingleAlign. For understanding the duration between two sentinel events, NoAlign was the clear winner: correctness was 88% vs. 36% for DualStretch, completion time was 55 seconds vs. 101 seconds for DualLeft, and error was 1.5% vs. 8.4% for DualStretch. For understanding precursor and aftereffect events, there was no significant difference among approaches. A free copy of this paper, the evaluation stimuli and data, and source code are available at osf.io/78fs5
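As a rough illustration of how the four approaches differ, the following Python sketch maps one row's event times to horizontal plot positions under each approach. The event names, data layout, and function are invented for illustration and are not the paper's implementation:

```python
# Hypothetical sketch of the four sentinel event alignment approaches.
def align_row(events, approach, s1="Diagnosis", s2="Surgery", span=100.0):
    """Map one row's (event_name, time) pairs to x positions.

    events:   list of (name, time) tuples, sorted by time
    approach: "NoAlign", "SingleAlign", "DualLeft", or "DualStretch"
    s1, s2:   sentinel event names (s2 is used only by the dual approaches)
    span:     target width between the two sentinels under DualStretch
    """
    if approach == "NoAlign":
        # Absolute time: every row keeps its own origin.
        return list(events)
    t1 = next(t for name, t in events if name == s1)
    if approach in ("SingleAlign", "DualLeft"):
        # Shift the row so the first sentinel sits at x = 0; under DualLeft
        # the second sentinel lands wherever the row's own duration puts it.
        return [(name, t - t1) for name, t in events]
    if approach == "DualStretch":
        # Rescale so both sentinels line up across rows: s1 at 0, s2 at span.
        t2 = next(t for name, t in events if name == s2)
        scale = span / (t2 - t1)
        return [(name, (t - t1) * scale) for name, t in events]
    raise ValueError(f"unknown approach: {approach}")

row = [("Admit", 2), ("Diagnosis", 5), ("Med A", 9), ("Surgery", 15)]
for approach in ("NoAlign", "SingleAlign", "DualLeft", "DualStretch"):
    print(approach, align_row(row, approach))
```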
Award ID(s):
1755901
PAR ID:
10157013
Journal Name:
2019 IEEE Visualization Conference (VIS)
Page Range / eLocation ID:
1 - 5
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Temporal event sequence alignment has been used in many domains to visualize nuanced changes and interactions over time. Existing approaches align one or two sentinel events, so overview tasks require examining each alignment of interest in turn through interaction, or juxtaposing many visualizations. Furthermore, existing event attribute overviews are not closely tied to sequence visualizations. We present SEQUENCE BRAIDING, a novel overview visualization for temporal event sequences and attributes using a layered directed acyclic network. SEQUENCE BRAIDING visually aligns many temporal events and attribute groups simultaneously and supports arbitrary ordering, absence, and duplication of events. In a controlled experiment we compare SEQUENCE BRAIDING and IDMVis on user task completion time, correctness, error, and confidence. Our results provide good evidence that users of SEQUENCE BRAIDING can understand high-level patterns and trends faster and with similar error. A full version of this paper with all appendices; the evaluation stimuli, data, and analysis code; and source code are available at osf.io/mq2wt.
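The following is a minimal, hypothetical sketch of the general idea of aligning many sequences at once: derive one consensus column order over event types, then thread each sequence through those columns, tolerating absent, duplicated, or out-of-order events. This is our own simplification, not the SEQUENCE BRAIDING layout algorithm:

```python
# Toy consensus alignment over many event sequences.
from collections import defaultdict

def consensus_columns(sequences):
    """Order event types by their average normalized position across sequences."""
    positions = defaultdict(list)
    for seq in sequences:
        for i, event in enumerate(seq):
            positions[event].append(i / max(len(seq) - 1, 1))
    return sorted(positions, key=lambda e: sum(positions[e]) / len(positions[e]))

def thread_sequence(seq, columns):
    """Map a sequence to (column, event) steps; absent events are simply
    skipped, duplicates revisit their column, and out-of-order events
    produce a backward step in the path."""
    col_of = {event: c for c, event in enumerate(columns)}
    return [(col_of[event], event) for event in seq]

sequences = [["A", "B", "C"], ["A", "C"], ["B", "A", "C", "C"]]
cols = consensus_columns(sequences)  # -> ["A", "B", "C"]
for seq in sequences:
    print(thread_sequence(seq, cols))
```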
  2. What is the relationship between language and event cognition? Past work has suggested that linguistic/aspectual distinctions encoding the internal temporal profile of events map onto nonlinguistic event representations. Here, we use a novel visual detection task to directly test the hypothesis that processing telic versus atelic sentences (e.g., “Ebony folded a napkin in 10 seconds” vs. “Ebony did some folding for 10 seconds”) can influence whether the very same visual event is processed as containing distinct temporal stages including a well‐defined endpoint or lacking such structure, respectively. In two experiments, we show that processing (a)telicity in language shifts how people later construe the temporal structure of identical visual stimuli. We conclude that event construals are malleable representations that can align with the linguistic framing of events. 
  3. Timelines are commonly represented on a horizontal line, which is not necessarily the most effective way to visualize temporal event sequences. However, few experiments have evaluated how timeline shape influences task performance. We present the design and results of a controlled experiment run on Amazon Mechanical Turk (n=192) in which we evaluate how timeline shape affects task completion time, correctness, and user preference. We tested 12 combinations of 4 shapes (horizontal line, vertical line, circle, and spiral) and 3 data types (recurrent, non-recurrent, and mixed event sequences). We found good evidence that timeline shape meaningfully affects user task completion time but not correctness, and that users have a strong shape preference. Building on our results, we present design guidelines for creating effective timeline visualizations based on user task and data types. A free copy of this paper, the evaluation stimuli and data, and code are available at https://osf.io/qr5yu/
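As a rough sketch of how the four tested shapes could map a normalized timestamp to screen coordinates, the following function illustrates one plausible parameterization; the radius and number of spiral turns are our own choices, not values from the paper:

```python
# Hypothetical mapping from normalized time t in [0, 1] to 2D coordinates.
import math

def timeline_xy(t, shape, turns=3, radius=1.0):
    """Return the (x, y) position of time t on the given timeline shape."""
    if shape == "horizontal":
        return (t, 0.0)
    if shape == "vertical":
        return (0.0, -t)                      # time flows downward
    if shape == "circle":
        angle = 2 * math.pi * t               # one full revolution
        return (radius * math.sin(angle), radius * math.cos(angle))
    if shape == "spiral":
        angle = 2 * math.pi * turns * t       # several revolutions...
        r = radius * t                        # ...with radius growing over time
        return (r * math.sin(angle), r * math.cos(angle))
    raise ValueError(f"unknown shape: {shape}")

for shape in ("horizontal", "vertical", "circle", "spiral"):
    print(shape, [tuple(round(v, 2) for v in timeline_xy(t, shape))
                  for t in (0.0, 0.5, 1.0)])
```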
  4. Concurrency bugs are extremely difficult to detect. Recently, several dynamic techniques have achieved sound analysis; M2 is even complete for two threads. It is designed to decide whether two events can occur consecutively. However, real-world concurrency bugs can involve more events and threads; some can occur when the order of two or more events can be exchanged even if they do not occur consecutively. We propose a new technique, SeqCheck, to soundly decide whether a sequence of events can occur in a specified order. The ordered sequence represents a potential concurrency bug, and several known forms of concurrency bugs can easily be encoded as event sequences, each representing one way the bug can occur. To achieve this, SeqCheck explicitly analyzes branch events and includes a set of efficient algorithms. We show that SeqCheck is sound, and that it is also complete on traces of two threads. We have implemented SeqCheck to detect three types of concurrency bugs and evaluated it on 51 Java benchmarks producing up to billions of events. Compared with M2 and three other recent sound race detectors, SeqCheck detected 333 races in ~30 minutes, while the others detected 130 to 285 races in ~6 to ~12 hours. SeqCheck detected 20 deadlocks in ~6 seconds, only one fewer than Dirk, which spent more than an hour. SeqCheck also detected 30 atomicity violations in ~20 minutes. The evaluation shows that SeqCheck can significantly outperform existing concurrency bug detectors.
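To illustrate the paper's framing (not SeqCheck's actual algorithm), the toy check below encodes a bug pattern as an ordered event sequence and tests whether a recorded trace permits that order using program order alone; the real analysis also reasons about branches and synchronization, which this sketch omits:

```python
# Toy feasibility check for an ordered event sequence against a trace.
def order_feasible(trace, pattern):
    """trace:   list of (thread_id, event_label) in recorded order
    pattern: list of event_labels that must be able to occur in this order
    Returns False if any two pattern events on the same thread would have
    to swap their program order; True otherwise (an over-approximation)."""
    # Index each trace event; labels are assumed unique for simplicity.
    pos = {label: (i, tid) for i, (tid, label) in enumerate(trace)}
    chosen = [pos[label] for label in pattern]
    for a in range(len(chosen)):
        for b in range(a + 1, len(chosen)):
            (ia, ta), (ib, tb) = chosen[a], chosen[b]
            if ta == tb and ia > ib:
                return False  # same thread: program order is fixed
    return True

# Two threads race on x: the write-write-read pattern is feasible because
# the writes sit on different threads and can be reordered.
trace = [(1, "t1:write x"), (2, "t2:write x"), (1, "t1:read x")]
print(order_feasible(trace, ["t2:write x", "t1:write x", "t1:read x"]))  # True
print(order_feasible(trace, ["t1:read x", "t1:write x"]))                # False
```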
  5. Modeling temporal event sequences on the vertices of a network is an important problem with widespread applications; examples include modeling influences in social networks, preventing crimes by modeling their space-time occurrences, and forecasting earthquakes. Existing solutions for this problem use a parametric approach, whose applicability is limited to event sequences following some well-known distributions, which is not true for many real-life event datasets. To overcome this limitation, in this work we propose a composite recurrent neural network model for learning events occurring on the vertices of a network over time. Our proposed model combines two long short-term memory units to capture the base intensity and the conditional intensity of an event sequence. We also introduce a second-order statistic loss that penalizes divergence between the distributions of hop-count distance between consecutive events in the generated and target sequences. Given a sequence of vertices of a network in which an event has occurred, the proposed model predicts the vertex where the next event is most likely to occur. Experimental results on synthetic and real-world datasets validate the superiority of our proposed model over various baseline methods.
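A minimal PyTorch sketch of the described two-LSTM composition is given below; the layer sizes, fusion step, and linear prediction head are our own guesses rather than the paper's specification:

```python
# Hypothetical composite model: two LSTMs over the same event history,
# one intended for base intensity and one for conditional intensity,
# fused to score which vertex hosts the next event.
import torch
import torch.nn as nn

class CompositeEventModel(nn.Module):
    def __init__(self, num_vertices, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_vertices, embed_dim)
        self.base_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.cond_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(2 * hidden_dim, num_vertices)

    def forward(self, vertex_seq):
        # vertex_seq: (batch, seq_len) vertex indices of past events
        x = self.embed(vertex_seq)
        base_out, _ = self.base_lstm(x)
        cond_out, _ = self.cond_lstm(x)
        # Fuse the final hidden states of both recurrent units.
        fused = torch.cat([base_out[:, -1], cond_out[:, -1]], dim=-1)
        return self.head(fused)  # logits over vertices for the next event

model = CompositeEventModel(num_vertices=10)
history = torch.randint(0, 10, (4, 7))       # 4 sequences of 7 events each
next_vertex = model(history).argmax(dim=-1)  # predicted next vertex per sequence
print(next_vertex.shape)                     # torch.Size([4])
```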