Title: Visuo-proprioceptive recalibration and the sensorimotor map
Spatial perception of our hand is closely linked to our ability to move the hand accurately. We might therefore expect that reach planning would take into account any changes in perceived hand position; in other words, that perception and action relating to the hand should depend on a common sensorimotor map. However, there is evidence to suggest that changes in perceived hand position affect a body representation that functions separately from the body representation used to control movement. Here, we examined target-directed reaching before and after participants either did (Mismatch group) or did not (Veridical group) experience a cue conflict known to elicit recalibration in perceived hand position. For the reaching task, participants grasped a robotic manipulandum that positioned their unseen hand for each trial. Participants then briskly moved the handle straight ahead to a visual target, receiving no performance feedback. For the perceptual calibration task, participants estimated the locations of visual, proprioceptive, or combined cues about their unseen hand. The Mismatch group experienced a gradual 70-mm forward mismatch between visual and proprioceptive cues, resulting in forward proprioceptive recalibration. Participants made significantly shorter reaches after this manipulation, consistent with feeling their hand to be further forward than it was, but reaching performance returned to baseline levels after only 10 reaches. The Veridical group, after exposure to veridically aligned visual and proprioceptive cues about the hand, showed no change in reach distance. These results suggest that perceptual recalibration affects the same sensorimotor map that is used to plan target-directed reaches.

NEW & NOTEWORTHY If perceived hand position changes, we might assume this affects the sensorimotor map and, in turn, reaches made with that hand. However, there is evidence for separate body representations involved in perception versus action. After a cross-sensory conflict that results in proprioceptive recalibration in the forward direction, participants made shorter reaches as predicted, but only briefly. This suggests perceptual recalibration does affect the sensorimotor map used to plan reaches, but the interaction may be short-lived.
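For intuition, visuo-proprioceptive integration and recalibration are commonly described with the minimum-variance framework; the equations below are that standard formulation from the multisensory literature, with illustrative symbols, not equations taken from this paper:

    \hat{x} = w_V x_V + w_P x_P, \qquad w_V = \frac{\sigma_P^2}{\sigma_V^2 + \sigma_P^2}, \quad w_P = 1 - w_V

    x_P' = x_P + \rho_P\,(x_V - x_P), \qquad 0 \le \rho_P \le 1

Here x_V and x_P are the unimodal visual and proprioceptive hand-position estimates with variances \sigma_V^2 and \sigma_P^2, and \rho_P is the fraction of the cue conflict absorbed by proprioceptive recalibration. Under this account, a forward 70-mm conflict shifts the felt hand position forward by roughly \rho_P \times 70 mm, which would bias reaches to fall short of the target, as observed.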
Award ID(s): 1753915
NSF-PAR ID: 10429247
Author(s) / Creator(s):
Date Published:
Journal Name: Journal of Neurophysiology
Volume: 129
Issue: 5
ISSN: 0022-3077
Page Range / eLocation ID: 1249 to 1258
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. The brain estimates hand position using vision and position sense (proprioception). The relationship between visual and proprioceptive estimates is somewhat flexible: visual information about the index finger can be spatially displaced from proprioceptive information, resulting in cross-sensory recalibration of the visual and proprioceptive unimodal position estimates. According to the causal inference framework, recalibration occurs when the unimodal estimates are attributed to a common cause and integrated. If separate causes are perceived, then recalibration should be reduced. Here we assessed visuo-proprioceptive recalibration in response to a gradual visuo-proprioceptive mismatch at the left index fingertip. Experiment 1 asked how frequently a 70 mm mismatch is consciously perceived compared to when no mismatch is present, and whether awareness is linked to reduced visuo-proprioceptive recalibration, consistent with causal inference predictions. However, conscious offset awareness occurred rarely. Experiment 2 tested a larger displacement, 140 mm, and asked participants about their perception more frequently, including at 70 mm. Experiment 3 confirmed that participants were unbiased at estimating distances in the 2D virtual reality display. Results suggest that conscious awareness of the mismatch was indeed linked to reduced cross-sensory recalibration as predicted by the causal inference framework, but this was clear only at higher mismatch magnitudes (70–140 mm). At smaller offsets (up to 70 mm), conscious perception of an offset may not override unconscious belief in a common cause, perhaps because the perceived offset magnitude is in range of participants’ natural sensory biases. These findings highlight the interaction of conscious awareness with multisensory processes in hand perception. 
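The causal inference framework referenced here is typically the Bayesian model of Körding et al. (2007). Below is a minimal sketch of the common-cause posterior, with illustrative noise parameters rather than values fitted to this study:

    import numpy as np
    from scipy.stats import norm

    def p_common(x_v, x_p, sigma_v=5.0, sigma_p=10.0, sigma_prior=50.0, prior_c=0.5):
        """Posterior probability that visual (x_v) and proprioceptive (x_p)
        measurements (in mm) share a common cause (Koerding et al., 2007).
        Noise parameters are illustrative, not fitted to this study."""
        # Likelihood of both measurements under one common source,
        # marginalizing over the source location (zero-mean Gaussian prior).
        var_c = (sigma_v**2 * sigma_p**2
                 + sigma_v**2 * sigma_prior**2
                 + sigma_p**2 * sigma_prior**2)
        like_common = np.exp(-((x_v - x_p)**2 * sigma_prior**2
                               + x_v**2 * sigma_p**2
                               + x_p**2 * sigma_v**2) / (2.0 * var_c))
        like_common /= 2.0 * np.pi * np.sqrt(var_c)
        # Likelihood under two independent sources.
        like_indep = (norm.pdf(x_v, 0.0, np.sqrt(sigma_v**2 + sigma_prior**2))
                      * norm.pdf(x_p, 0.0, np.sqrt(sigma_p**2 + sigma_prior**2)))
        return (prior_c * like_common
                / (prior_c * like_common + (1.0 - prior_c) * like_indep))

    print(p_common(0.0, 0.0))      # aligned cues: high common-cause posterior
    print(p_common(35.0, -35.0))   # cues 70 mm apart: much lower posterior

In this model, recalibration scales with the common-cause posterior, so an offset consciously attributed to separate causes should recalibrate less; that is the prediction tested above.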
  2. When visual and proprioceptive estimates of hand position disagree (e.g., viewing the hand underwater), the brain realigns them to reduce mismatch. This perceptual change is reflected in primary motor cortex (M1) excitability, suggesting potential relevance for hand movement. Here, we asked whether fingertip visuo-proprioceptive misalignment affects only the brain’s representation of that finger (somatotopically focal), or extends to other parts of the limb that would be needed to move the misaligned finger (somatotopically broad). In Experiments 1 and 2, before and after misaligned or veridical visuo-proprioceptive training at the index finger, we used transcranial magnetic stimulation to assess M1 representation of five hand and arm muscles. The index finger representation showed an association between M1 excitability and visuo-proprioceptive realignment, as did the pinkie finger representation to a lesser extent. Forearm flexors, forearm extensors, and biceps did not show any such relationship. In Experiment 3, participants indicated their proprioceptive estimate of the fingertip, knuckle, wrist, and elbow, before and after misalignment at the fingertip. Proprioceptive realignment at the knuckle, but not the wrist or elbow, was correlated with realignment at the fingertip. These results suggest the effects of visuo-proprioceptive mismatch are somatotopically focal in both sensory and motor domains.
  3. Motor learning in visuomotor adaptation tasks results from both explicit and implicit processes, each responding differently to an error signal. Although the motor output side of these processes has been extensively studied, the visual input side is relatively unknown. We investigated if and how depth perception affects the computation of error information by explicit and implicit motor learning. Two groups of participants made reaching movements to bring a virtual cursor to a target in the frontoparallel plane. The Delayed group was allowed to reaim and their feedback was delayed to emphasize explicit learning, whereas the Clamped group received task-irrelevant clamped cursor feedback and continued to aim straight at the target to emphasize implicit adaptation. Both groups played this game in a highly detailed virtual environment (depth condition), leveraging a cover task of playing darts in a virtual tavern, and in an empty environment (no-depth condition). The Delayed group showed an increase in error sensitivity under depth relative to no-depth. In contrast, the Clamped group adapted to the same degree under both conditions. The movement kinematics of the Delayed participants also changed under the depth condition, consistent with the target appearing more distant, unlike the Clamped group. A comparison of the Delayed behavioral data with a perceptual task from the same individuals showed that the greater reaiming in the depth condition was consistent with an increase in the scaling of the error distance and size. These findings suggest that explicit and implicit learning processes may rely on different sources of perceptual information. NEW & NOTEWORTHY We leveraged a classic sensorimotor adaptation task to perform a first systematic assessment of the role of perceptual cues in the estimation of an error signal in 3-D space during motor learning. We crossed two conditions presenting different amounts of depth information, with two manipulations emphasizing explicit and implicit learning processes. Explicit learning responded to the visual conditions, consistent with perceptual reports, whereas implicit learning appeared to be independent of them.
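The "error sensitivity" contrast can be read through the standard single-rate state-space model of trial-by-trial adaptation; this is a sketch of that common model, not the authors' specific fit:

    x_{n+1} = A\,x_n + B\,e_n

Here x_n is the motor correction on trial n, e_n the experienced visual error, A a retention factor, and B the error sensitivity. The Delayed-group result corresponds to a larger effective B (or, equivalently, an error e_n rescaled by perceived distance) in the depth condition, while the Clamped group's B is unchanged across conditions.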
  4. Hand position can be estimated by vision and proprioception (position sense). The brain is thought to weight and integrate these percepts to form a multisensory estimate of hand position with which to guide movement. Force field adaptation, a type of cerebellum-dependent motor learning, is associated with both motor and proprioceptive changes. The cerebellum has connections with multisensory parietal regions; however, it is unknown if force adaptation is associated with changes in multisensory perception. If force adaptation affects all relevant sensory modalities similarly, the brain’s weighting of vision vs. proprioception should be maintained. Alternatively, if force perturbation is interpreted as somatosensory unreliability, vision may be up-weighted relative to proprioception. We assessed visuo-proprioceptive weighting with a perceptual estimation task before and after subjects performed straight-ahead reaches grasping a robotic manipulandum. Each subject performed one session with a clockwise or counter-clockwise velocity-dependent force field, and one session in a null field. Subjects increased their weight of vision vs. proprioception in the force field session relative to the null session, regardless of force field direction, in the straight-ahead dimension (F(1,44) = 5.13, p = 0.029). This suggests that force field adaptation is associated with an increase in the brain’s weighting of vision vs. proprioception.
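A common way such a visual weight is estimated from behavior in this kind of task (a sketch of the usual estimator, not necessarily the authors' exact computation) compares the combined-cue estimate with the two unimodal estimates:

    \hat{w}_V = \frac{\hat{x}_{VP} - \hat{x}_P}{\hat{x}_V - \hat{x}_P}

where \hat{x}_V, \hat{x}_P, and \hat{x}_{VP} are the indicated positions of visual-only, proprioceptive-only, and combined targets; \hat{w}_V = 1 means complete reliance on vision and \hat{w}_V = 0 complete reliance on proprioception. The reported effect is an increase in \hat{w}_V after force field exposure.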
  5. The human ability to use different tools demonstrates our capability of forming and maintaining multiple, context-specific motor memories. Experimentally, this has been investigated in dual adaptation, where participants adjust their reaching movements to opposing visuomotor transformations. Adaptation in these paradigms occurs by distinct processes, such as strategies for each transformation or the implicit acquisition of distinct visuomotor mappings. Although distinct, transformation-dependent aftereffects have been interpreted as support for the latter, they could reflect adaptation of a single visuomotor map, which is locally adjusted in different regions of the workspace. Indeed, recent studies suggest that explicit aiming strategies direct where in the workspace implicit adaptation occurs, thus potentially serving as a cue to enable dual adaptation. Disentangling these possibilities is critical to understanding how humans acquire and maintain motor memories for different skills and tools. We therefore investigated generalization of explicit and implicit adaptation to untrained movement directions after participants practiced two opposing cursor rotations, which were associated with the visual display being presented in the left or right half of the screen. Whereas participants learned to compensate for opposing rotations by explicit strategies specific to this visual workspace cue, aftereffects were not cue sensitive. Instead, aftereffects displayed bimodal generalization patterns that appeared to reflect locally limited learning of both transformations. By varying target arrangements and instructions, we show that these patterns are consistent with implicit adaptation that generalizes locally around movement plans associated with opposing visuomotor transformations. Our findings show that strategies can shape implicit adaptation in a complex manner. NEW & NOTEWORTHY Visuomotor dual adaptation experiments have identified contextual cues that enable learning of separate visuomotor mappings, but the underlying representations of learning are unclear. We report that visual workspace separation as a contextual cue enables the compensation of opposing cursor rotations by a combination of explicit and implicit processes: Learners developed context-dependent explicit aiming strategies, whereas an implicit visuomotor map represented dual adaptation independent from arbitrary context cues by local adaptation around the explicit movement plan.
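The local generalization around the movement plan invoked here is usually modeled as Gaussian tuning centered on the aiming direction; a sketch of that standard model, with illustrative symbols rather than the authors' fitted parameters:

    g(\theta) = A \exp\!\left( -\frac{(\theta - \theta_{\mathrm{plan}})^2}{2\sigma^2} \right)

where \theta is the probe direction, \theta_{\mathrm{plan}} the explicit aiming direction associated with a given rotation, A the peak implicit aftereffect, and \sigma the generalization width. Summing one such curve per transformation, each centered on its own aiming direction, reproduces the bimodal aftereffect pattern described above.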