Objective: The aim of this study is to measure drivers’ attention to preview and their velocity and acceleration tracking error to evaluate two- and three-dimensional displays for following a winding roadway.
Background: Display perturbation techniques and Fourier analysis of steering movements can be used to infer drivers’ spatio-temporal distribution of attention to preview. Fourier analysis of tracking error time histories provides measures of position, velocity, and acceleration error.
Method: Participants tracked a winding roadway with 1 s of preview in low-fidelity driving simulations. Position and rate-aided vehicle dynamics were paired with top-down and windshield displays of the roadway.
Results: For both vehicle dynamics, tracking was smoother with the windshield display. This display emphasizes nearer preview positions and has a closer correspondence to the control-theoretic optimal attentional distributions for these tasks than the top-down display. This correspondence is interpreted as a form of stimulus–response compatibility. The position error and attentional signal-to-noise ratios did not differ between the two displays with position control, but with more complex rate-aided control much higher position error and much lower attentional signal-to-noise ratios occurred with the top-down display.
Conclusion: Display-driven influences on the distribution of attention may facilitate tracking with preview when they are similar to optimal attentional distributions derived from control theory.
Application: Display perturbation techniques can be used to assess spatially distributed attention to evaluate displays and secondary tasks in the context of driving. This methodology can supplement eye movement measurements to determine what information is guiding drivers’ actions.
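The velocity and acceleration error measures described in the Background can be obtained from a position-error time history by frequency-domain differentiation: differentiating in time multiplies each Fourier component by j·2πf. A minimal sketch on a synthetic signal (the sampling rate, error signal, and function names are illustrative, not taken from the study):

```python
import numpy as np

def rms(x):
    return np.sqrt(np.mean(np.asarray(x) ** 2))

def error_rates(e, fs):
    """Return (position, velocity, acceleration) RMS error from a tracking-error
    time history, using frequency-domain differentiation: d/dt corresponds to
    multiplying each Fourier component by j*2*pi*f."""
    n = len(e)
    E = np.fft.rfft(e)
    jw = 1j * 2 * np.pi * np.fft.rfftfreq(n, d=1.0 / fs)
    vel = np.fft.irfft(jw * E, n)          # first derivative of the error
    acc = np.fft.irfft((jw ** 2) * E, n)   # second derivative of the error
    return rms(e), rms(vel), rms(acc)

# Synthetic example: a 1 Hz sinusoidal tracking error sampled at 100 Hz.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
e = 0.5 * np.sin(2 * np.pi * 1.0 * t)
pos, vel, acc = error_rates(e, fs)
# For e = A*sin(w*t), RMS velocity error is w times RMS position error,
# and RMS acceleration error is w**2 times RMS position error.
```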
Angular offset distributions during fixation are, more often than not, multimodal
Typically, the position error of an eye-tracking device is measured as the distance of the eye position from the target position in two-dimensional space (angular offset). Accuracy is the mean angular offset. The mean is a highly interpretable measure of central tendency if the underlying error distribution is unimodal and normal. However, in the context of an underlying multimodal distribution, the mean is less interpretable. We will present evidence that the majority of such distributions are multimodal. Only 14.7% of fixation angular offset distributions were unimodal, and of these, only 11.5% were normally distributed. (Of the entire dataset, 1.7% were unimodal and normal.) This multimodality holds even when there is only a single, continuous tracking fixation segment per trial. We present several approaches to measure accuracy in the face of multimodality. We also address the role of fixation drift in partially explaining multimodality.
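The unimodal-and-normal condition described above can be checked, for example, by comparing one- and two-component Gaussian mixtures by BIC and applying a normality test to the offsets. A minimal sketch on synthetic data (the mixture parameters are hypothetical; this is not the paper's specific procedure):

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical angular offsets (deg) for one fixation: a bimodal mixture,
# as the paper reports is typical; real data would come from an eye tracker.
offsets = np.concatenate([rng.normal(0.3, 0.05, 400),
                          rng.normal(0.9, 0.05, 200)]).reshape(-1, 1)

# Accuracy as conventionally reported: the mean angular offset.
accuracy = offsets.mean()

# Compare 1- vs 2-component Gaussian mixtures by BIC; a lower BIC for the
# 2-component model is evidence against unimodality.
bic = {k: GaussianMixture(n_components=k, random_state=0)
          .fit(offsets).bic(offsets) for k in (1, 2)}
bimodal = bic[2] < bic[1]

# For unimodal data, a normality test (D'Agostino-Pearson) checks the second
# condition under which the mean is readily interpretable.
p_normal = stats.normaltest(offsets.ravel()).pvalue
```

On this clearly bimodal sample, the two-component fit wins and normality is rejected, so the mean (0.5 deg here) sits between the two modes rather than on either of them.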
- Award ID(s): 1714623
- PAR ID: 10285744
- Date Published:
- Journal Name: Journal of Eye Movement Research
- Volume: 14
- Issue: 3
- ISSN: 1995-8692
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Intelligent systems to support collaborative learning rely on real-time behavioral data, including language, audio, and video. However, noisy data, such as word errors in speech recognition, audio static or background noise, and facial mistracking in video, often limit the utility of multimodal data. It remains an open question how we can build reliable multimodal models in the face of substantial data noise. In this paper, we investigate the impact of data noise on the recognition of confusion and conflict moments during collaborative programming sessions by 25 dyads of elementary school learners. We measure language errors with word error rate (WER), audio noise with speech-to-noise ratio (SNR), and video errors with frame-by-frame facial tracking accuracy. The results showed that the model’s accuracy for detecting confusion and conflict in the language modality decreased drastically from 0.84 to 0.73 when the WER exceeded 20%. Similarly, in the audio modality, the model’s accuracy decreased sharply from 0.79 to 0.61 when the SNR dropped below 5 dB. Conversely, the model’s accuracy remained relatively constant in the video modality at a comparable level (> 0.70) so long as at least one learner’s face was successfully tracked. Moreover, we trained several multimodal models and found that integrating multimodal data could effectively offset the negative effect of noise in unimodal data, ultimately leading to improved accuracy in recognizing confusion and conflict. These findings have practical implications for the future deployment of intelligent systems that support collaborative learning in actual classroom settings.
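The two noise metrics used above are straightforward to compute: WER is the word-level edit distance normalized by reference length, and SNR is a log power ratio. A minimal sketch (the example sentences and signals are made up for illustration):

```python
import numpy as np

def wer(ref, hyp):
    """Word error rate: word-level Levenshtein distance / reference length."""
    r, h = ref.split(), hyp.split()
    d = np.zeros((len(r) + 1, len(h) + 1), dtype=int)
    d[:, 0] = np.arange(len(r) + 1)
    d[0, :] = np.arange(len(h) + 1)
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1, j - 1] + (r[i - 1] != h[j - 1])  # substitution (or match)
            d[i, j] = min(sub, d[i - 1, j] + 1, d[i, j - 1] + 1)  # del, ins
    return d[-1, -1] / len(r)

def snr_db(speech, noise):
    """Speech-to-noise ratio in dB from speech and noise samples."""
    return 10 * np.log10(np.mean(speech ** 2) / np.mean(noise ** 2))

# One deletion out of five reference words -> WER = 0.2, i.e. the paper's
# 20% threshold beyond which language-modality accuracy dropped.
w = wer("the code does not work", "the code does work")
```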
Abstract Total ice water content (IWC) derived from an isokinetic evaporator probe and ice crystal particle size distributions (PSDs) measured by a two-dimensional stereo probe and precipitation imaging probe installed on an aircraft during the 2014 European High Altitude Ice Crystals–North American High IWC field campaign (HAIC/HIWC) were used to characterize regions of high IWC consisting mainly of small ice crystals (HIWC_S) with IWC ≥ 1.0 g m−3 and median mass diameter (MMD) < 0.5 mm. A novel fitting routine developed to automatically determine whether a unimodal, bimodal, or trimodal gamma distribution best fits a PSD was used to compare characteristics of HIWC_S and other PSDs (e.g., multimodality, gamma fit parameters) for HIWC_S simulations. The variation of these characteristics and bulk properties (MMD, IWC) was regressed with temperature, IWC, and vertical velocity. HIWC_S regions were most pronounced in updraft cores. The three modes of the PSD reveal different dominant processes contributing to ice growth: nucleation for maximum dimension D < 0.15 mm, diffusion for 0.15 < D < 1.0 mm, and aggregation for D > 1.0 mm. The frequency of trimodal distributions increased with temperature. The volumes of equally plausible parameters derived in the phase space of gamma fit parameters increased with temperature for unimodal distributions and, for temperatures less than −27°C, for multimodal distributions. Bimodal distributions with 0.4 mm in the larger mode were most common in updraft cores and HIWC_S regions; bimodal distributions with 0.4 mm in the smaller mode were least common in convective cores.
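A single gamma PSD mode of the form N(D) = N0·D^mu·exp(−lam·D) is linear in (ln N0, mu, lam) after taking logs, so one mode can be fit by ordinary least squares; a routine like the one described would then compare such fits against bimodal and trimodal mixtures. A sketch on synthetic data (all numbers are hypothetical, and this is not the paper's actual fitting routine):

```python
import numpy as np

rng = np.random.default_rng(1)
D = np.linspace(0.05, 3.0, 60)                   # size-bin centers, mm
n0_true, mu_true, lam_true = 1.0e4, 1.5, 4.0     # illustrative gamma parameters
N = n0_true * D ** mu_true * np.exp(-lam_true * D)
N_obs = N * rng.lognormal(0.0, 0.05, D.size)     # multiplicative measurement noise

# ln N(D) = ln N0 + mu * ln D - lam * D  ->  ordinary least squares in log space,
# so small- and large-particle bins contribute comparably despite the
# orders-of-magnitude range in concentration.
A = np.column_stack([np.ones_like(D), np.log(D), -D])
(ln_n0, mu, lam), *_ = np.linalg.lstsq(A, np.log(N_obs), rcond=None)
n0 = np.exp(ln_n0)
```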
This study presents a comprehensive analysis of three types of multimodal data (response accuracy, response times, and eye-tracking data) derived from a computer-based spatial rotation test. To tackle the complexity of high-dimensional data analysis challenges, we have developed a methodological framework incorporating various statistical and machine learning methods. The results of our study reveal that hidden state transition probabilities, based on eye-tracking features, may be contingent on skill mastery estimated from the fluency CDM model. The hidden state trajectory offers additional diagnostic insights into spatial rotation problem-solving, surpassing the information provided by the fluency CDM alone. Furthermore, the distribution of participants across different hidden states reflects the intricate nature of visualizing objects in each item, adding a nuanced dimension to the characterization of item features. This complements the information obtained from item parameters in the fluency CDM model, which relies on response accuracy and response time. Our findings have the potential to pave the way for the development of new psychometric and statistical models capable of seamlessly integrating various types of multimodal data. This integrated approach promises more meaningful and interpretable results, with implications for advancing the understanding of cognitive processes involved in spatial rotation tests.
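Once a hidden state trajectory has been inferred, the transition probabilities mentioned above reduce to normalized transition counts. A minimal sketch (the two states and the trajectory are hypothetical; the paper additionally conditions these probabilities on skill mastery from the fluency CDM):

```python
import numpy as np

def transition_matrix(states, n_states):
    """Row-stochastic transition-probability estimate from a state trajectory:
    entry (a, b) is the fraction of steps out of state a that land in state b."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Hypothetical per-item state trajectory inferred from eye-tracking features
# (0 = scanning, 1 = focused comparison; labels are illustrative only).
traj = [0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0]
P = transition_matrix(traj, 2)   # each row of P sums to 1
```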
The brain estimates hand position using vision and position sense (proprioception). The relationship between visual and proprioceptive estimates is somewhat flexible: visual information about the index finger can be spatially displaced from proprioceptive information, resulting in cross-sensory recalibration of the visual and proprioceptive unimodal position estimates. According to the causal inference framework, recalibration occurs when the unimodal estimates are attributed to a common cause and integrated. If separate causes are perceived, then recalibration should be reduced. Here we assessed visuo-proprioceptive recalibration in response to a gradual visuo-proprioceptive mismatch at the left index fingertip. Experiment 1 asked how frequently a 70 mm mismatch is consciously perceived compared to when no mismatch is present, and whether awareness is linked to reduced visuo-proprioceptive recalibration, consistent with causal inference predictions. However, conscious offset awareness occurred rarely. Experiment 2 tested a larger displacement, 140 mm, and asked participants about their perception more frequently, including at 70 mm. Experiment 3 confirmed that participants were unbiased at estimating distances in the 2D virtual reality display. Results suggest that conscious awareness of the mismatch was indeed linked to reduced cross-sensory recalibration as predicted by the causal inference framework, but this was clear only at higher mismatch magnitudes (70–140 mm). At smaller offsets (up to 70 mm), conscious perception of an offset may not override unconscious belief in a common cause, perhaps because the perceived offset magnitude is in range of participants’ natural sensory biases. These findings highlight the interaction of conscious awareness with multisensory processes in hand perception.
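The causal inference prediction invoked here can be made concrete with the standard Gaussian-likelihood model (Körding et al.): the posterior probability of a common cause falls as the visuo-proprioceptive offset grows. A sketch (the sensory sigmas and spatial prior below are illustrative values, not parameters fitted to this study):

```python
import numpy as np

def p_common(x_v, x_p, sig_v, sig_p, sig_0, p_c=0.5):
    """Posterior probability that visual (x_v) and proprioceptive (x_p) position
    estimates share a common cause, under Gaussian likelihoods and a zero-mean
    Gaussian spatial prior with SD sig_0 (causal inference model)."""
    v, p, s0 = sig_v ** 2, sig_p ** 2, sig_0 ** 2
    # Likelihood under one common cause, with the true source integrated out.
    var1 = v * p + v * s0 + p * s0
    L1 = np.exp(-0.5 * ((x_v - x_p) ** 2 * s0 + x_v ** 2 * p + x_p ** 2 * v) / var1) \
         / (2 * np.pi * np.sqrt(var1))
    # Likelihood under two independent causes: product of the two marginals.
    L2 = np.exp(-0.5 * (x_v ** 2 / (v + s0) + x_p ** 2 / (p + s0))) \
         / (2 * np.pi * np.sqrt((v + s0) * (p + s0)))
    return p_c * L1 / (p_c * L1 + (1 - p_c) * L2)

# Hypothetical 70 mm vs 140 mm offsets (all units mm): the larger mismatch
# yields a lower common-cause probability, hence less recalibration.
p70 = p_common(0.0, 70.0, 30.0, 30.0, 100.0)
p140 = p_common(0.0, 140.0, 30.0, 30.0, 100.0)
```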