-
Visualization grammars are gaining popularity as they allow visualization specialists and experienced users to quickly create static and interactive views. Existing grammars, however, mostly focus on abstract views, ignoring three-dimensional (3D) views, which are essential in fields such as the natural sciences. We propose a generalized interaction grammar for the problem of coordinating heterogeneous view types, such as standard charts (e.g., based on Vega-Lite) and 3D anatomical views. An important aspect of our web-based framework is that user interactions with data items at various levels of detail can be systematically integrated and used to control the overall layout of the application workspace. With the help of a concise JSON-based specification of the intended workflow, we can handle complex interactive visual analysis scenarios. This enables rapid prototyping and iterative refinement of the visual analysis tool in collaboration with domain experts. We illustrate the usefulness of our framework in two real-world case studies from the field of neuroscience. Since the logic of the presented grammar-based approach for handling interactions between heterogeneous web-based views is free of any application specifics, it can also serve as a template for applications beyond biological research.
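As a rough illustration of this idea, the sketch below shows a hypothetical JSON-style workflow specification (written as a Python dictionary) that links a selection in a Vega-Lite chart to a highlight in a 3D view, together with a tiny event router. The schema and field names are assumptions for illustration, not the grammar's actual syntax.

```python
# A minimal sketch of a hypothetical JSON-style workflow specification that links
# a selection in a 2D chart view to a highlight in a 3D anatomical view.
# The view ids and the "on"/"then" fields are illustrative assumptions.
import json

workflow_spec = {
    "views": [
        {"id": "chart", "type": "vega-lite", "data": "neuron_stats"},
        {"id": "anatomy3d", "type": "volume", "data": "brain_mesh"},
    ],
    "interactions": [
        {
            "on": {"view": "chart", "event": "select", "field": "neuron_id"},
            "then": [{"view": "anatomy3d", "action": "highlight", "field": "neuron_id"}],
        }
    ],
}


def route_event(spec, source_view, event, payload):
    """Return the actions triggered in other views by an interaction event."""
    actions = []
    for rule in spec["interactions"]:
        trigger = rule["on"]
        if trigger["view"] == source_view and trigger["event"] == event:
            for target in rule["then"]:
                actions.append({**target, "value": payload.get(trigger["field"])})
    return actions


if __name__ == "__main__":
    # Selecting neuron 42 in the chart yields a highlight action for the 3D view.
    print(json.dumps(route_event(workflow_spec, "chart", "select", {"neuron_id": 42}), indent=2))
```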
-
Badminton is a fast-paced sport that requires a strategic combination of spatial, temporal, and technical tactics. To gain a competitive edge at high-level competitions, badminton professionals frequently analyze match videos to gain insights and develop game strategies. However, the current process for analyzing matches is time-consuming and relies heavily on manual note-taking, due to the lack of automatic data collection and appropriate visualization tools. As a result, there is a gap in effectively analyzing matches and communicating insights among badminton coaches and players. This work proposes an end-to-end immersive match analysis pipeline designed in close collaboration with badminton professionals, including Olympic and national coaches and players. We present VIRD, a VR Bird (i.e., shuttle) immersive analysis tool that supports interactive badminton game analysis in an immersive environment based on 3D-reconstructed game views of the match video. We propose a top-down analytic workflow that allows users to seamlessly move from a high-level match overview to a detailed game view of individual rallies and shots, using situated 3D visualizations and video. We collect 3D spatial and dynamic shot data and player poses with computer vision models and visualize them in VR. Through immersive visualizations, coaches can interactively analyze situated spatial data (player positions, poses, and shot trajectories) with flexible viewpoints while navigating between shots and rallies effectively with embodied interaction. We evaluated the usefulness of VIRD with Olympic and national-level coaches and players in real matches. Results show that immersive analytics supports effective badminton match analysis with reduced context-switching costs and enhances spatial understanding with a high sense of presence.
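As a rough sketch of the kind of data such a pipeline produces, the example below assumes a hypothetical per-shot record with a 3D shuttle trajectory and shows how landing positions and per-rally drill-downs could be derived. All field names are illustrative and not VIRD's actual data model.

```python
# A minimal sketch, assuming a hypothetical per-shot record with a 3D shuttle
# trajectory as produced by some computer-vision pipeline; names are illustrative.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Shot:
    rally_id: int
    player: str
    shot_type: str                                   # e.g. "smash", "clear", "drop"
    trajectory: List[Tuple[float, float, float]]     # (x, y, z) samples in court coordinates


def landing_positions(shots: List[Shot]) -> List[Tuple[float, float]]:
    """Project the last trajectory sample of each shot onto the court plane."""
    return [(s.trajectory[-1][0], s.trajectory[-1][1]) for s in shots if s.trajectory]


def shots_in_rally(shots: List[Shot], rally_id: int) -> List[Shot]:
    """Filter shots for the drill-down from match overview to a single rally."""
    return [s for s in shots if s.rally_id == rally_id]


if __name__ == "__main__":
    demo = [Shot(1, "A", "smash", [(0.0, 1.0, 2.5), (3.0, 4.0, 0.1)]),
            Shot(1, "B", "clear", [(3.0, 4.0, 0.3), (0.5, 9.0, 0.2)])]
    print(landing_positions(shots_in_rally(demo, 1)))
```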
-
Labels are widely used in augmented reality (AR) to display digital information. Ensuring the readability of AR labels requires placing them occlusion-free while keeping their visual links legible, especially when multiple labels exist in the scene. Although existing optimization-based methods, such as force-based methods, are effective in managing AR labels in static scenarios, they often struggle in dynamic scenarios with constantly moving objects. This is due to their focus on generating layouts optimal for the current moment, neglecting future moments and leading to sub-optimal or unstable layouts over time. In this work, we present RL-LABEL, a deep reinforcement learning-based method for managing the placement of AR labels in scenarios involving moving objects. RL-LABEL considers the current and predicted future states of objects and labels, such as positions and velocities, as well as the user’s viewpoint, to make informed decisions about label placement. It balances the trade-offs between immediate and long-term objectives. Our experiments on two real-world datasets show that RL-LABEL effectively learns the decision-making process for long-term optimization, outperforming two baselines (i.e., no view management and a force-based method) by minimizing label occlusions, line intersections, and label movement distance. Additionally, a user study involving 18 participants indicates that RL-LABEL excels over the baselines in aiding users to identify, compare, and summarize data on AR labels within dynamic scenes.
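The sketch below illustrates one way such a trade-off could be expressed as a per-step reward that penalizes occlusions, leader-line intersections, and label movement. The terms and weights are assumptions for illustration, not RL-LABEL's actual reward function.

```python
# A minimal sketch of a per-step reward balancing the three penalties named above.
# Term definitions and weights are assumptions, not RL-LABEL's actual reward.
def label_reward(num_occlusions: int,
                 num_line_intersections: int,
                 movement_distance: float,
                 w_occ: float = 1.0,
                 w_int: float = 0.5,
                 w_move: float = 0.1) -> float:
    """Higher is better: fewer occlusions, fewer line crossings, less label movement."""
    return -(w_occ * num_occlusions
             + w_int * num_line_intersections
             + w_move * movement_distance)


if __name__ == "__main__":
    # A stable, occlusion-free layout scores 0; a cluttered, jumpy one scores lower.
    print(label_reward(0, 0, 0.0), label_reward(2, 1, 0.8))
```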
-
Rowing requires physical strength and endurance in athletes as well as a precise rowing technique. The ideal rowing stroke is based on biomechanical principles and typically takes years to master. Except for time-consuming video analysis after practice, coaches currently have no means to quantitatively analyze a rower’s stroke sequence and body movement. We propose ARrow, an AR application for coaches and athletes that provides real-time and situated feedback on a rower’s body position and stroke. We use computer vision techniques to extract the rower’s 3D skeleton and to detect the rower’s stroke cycle. ARrow provides visual feedback on three levels: tracking of basic performance metrics over time, visual feedback and guidance on a rower’s stroke sequence, and a rowing ghost view that helps synchronize the body movement of two rowers. We developed ARrow in close collaboration with international rowing coaches and demonstrated its usefulness in a user study with athletes and coaches.
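As an illustration of the stroke-cycle detection step, the sketch below assumes a 1D signal (e.g., the horizontal wrist position taken from the extracted 3D skeleton) and marks the catch of each stroke as a local minimum. This simple heuristic is an assumption for illustration, not ARrow's actual detector.

```python
# A minimal sketch of stroke-cycle detection from a 1D signal such as the
# horizontal wrist position over time; treating the catch as a local minimum
# is an illustrative assumption.
from typing import List


def detect_catches(handle_x: List[float], min_gap: int = 10) -> List[int]:
    """Return frame indices of local minima (catches), at least min_gap frames apart."""
    catches = []
    for i in range(1, len(handle_x) - 1):
        if handle_x[i] < handle_x[i - 1] and handle_x[i] <= handle_x[i + 1]:
            if not catches or i - catches[-1] >= min_gap:
                catches.append(i)
    return catches


if __name__ == "__main__":
    import math
    signal = [math.sin(2 * math.pi * t / 60) for t in range(240)]  # roughly four strokes
    print(detect_catches(signal))  # frame index of the catch in each stroke cycle
```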
-
Recent advances in high-resolution connectomics provide researchers with access to accurate reconstructions of vast neuronal circuits and brain networks for the first time. Neuroscientists anticipate analyzing these networks to gain a better understanding of information processing in the brain. In particular, scientists are interested in identifying specific network motifs, i.e., repeating subgraphs of the larger brain network that are believed to be neuronal building blocks. To analyze these motifs, it is crucial to review instances of a motif in the brain network and then map the graph structure to the detailed 3D reconstructions of the involved neurons and synapses. We present Vimo, an interactive visual approach to analyze neuronal motifs and motif chains in large brain networks. Experts can sketch network motifs intuitively in a visual interface and specify structural properties of the involved neurons and synapses to query large connectomics datasets. Motif instances (MIs) can be explored in high-resolution 3D renderings of the involved neurons and synapses. To reduce visual clutter and simplify the analysis of MIs, we designed a continuous focus&context metaphor inspired by continuous visual abstractions [MAAB∗18] that allows the user to transition from the highly detailed rendering of the anatomical structure to views that emphasize the underlying motif structure and synaptic connectivity. Furthermore, Vimo supports the identification of motif chains, where a motif is used repeatedly to form a longer synaptic chain. We evaluate Vimo in a user study with seven domain experts and an in-depth case study on motifs in the central complex (CX) of the fruit fly brain.
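As a rough sketch of motif-instance search, the example below treats the query as a subgraph-isomorphism problem on a directed connectivity graph using networkx; it is a stand-in for illustration, not Vimo's actual query backend.

```python
# A minimal sketch: find instances of a feed-forward triangle (A -> B, B -> C, A -> C)
# in a toy directed connectome via subgraph isomorphism with networkx.
import networkx as nx
from networkx.algorithms.isomorphism import DiGraphMatcher

# Toy connectome: nodes are neurons, edges are synaptic connections.
connectome = nx.DiGraph([(1, 2), (2, 3), (1, 3), (3, 4), (2, 4)])

# Sketched motif to search for.
motif = nx.DiGraph([("A", "B"), ("B", "C"), ("A", "C")])

matcher = DiGraphMatcher(connectome, motif)
# subgraph_isomorphisms_iter yields {neuron: motif_node} mappings; invert them
# so each instance reads motif_node -> neuron id.
instances = [{m: n for n, m in mapping.items()}
             for mapping in matcher.subgraph_isomorphisms_iter()]
print(instances)  # e.g. [{'A': 1, 'B': 2, 'C': 3}, {'A': 2, 'B': 3, 'C': 4}]
```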