Fri, June 11
Speaker: Robert Scheidt, PhD
Title: Reorganization of coordination among redundant control signals during adaptation to rotation and scaling distortions of a newly learned sensorimotor transformation
Abstract: Neuromotor control is inherently redundant in that a vast number of efferent signals influence the low-dimensional, dynamic behavior of the body. An important unanswered question in the study of goal-directed movements is how the brain learns to coordinate changes within the set of redundant control parameters (e.g., motor cortical activities, spinal stretch reflex thresholds, muscle forces, joint torques) to produce desired changes in the state of a controlled endpoint (e.g., hand kinematics and/or kinetics). We are addressing this question in a series of experiments wherein subjects wear a data glove instrumented with a large number of bend sensors. Signals from these sensors drive motion of a cursor on a computer screen via a linear hand-to-screen projection matrix: each hand configuration projects onto only one screen location, but each screen location can be achieved using an infinite number of hand configurations. Subjects capture screen targets with the cursor by forming gestures with the fingers. This task is unique in that the hand-to-screen projection operator establishes a clear separation between the control degrees of freedom that contribute to kinematic performance and those that do not.
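The many-to-one structure of such a linear projection can be sketched numerically. This is a minimal illustration, not the mapping used in the study: the sensor count and the matrix entries below are assumed for the example, and the redundant degrees of freedom correspond to the null space of the projection matrix.

```python
import numpy as np

# Hypothetical hand-to-screen mapping: a linear projection from
# n_sensors glove bend-sensor signals to a 2-D cursor position.
# The sensor count and matrix values are illustrative assumptions.
rng = np.random.default_rng(0)
n_sensors = 14
A = rng.standard_normal((2, n_sensors))   # 2 x N projection matrix

hand = rng.standard_normal(n_sensors)     # one hand configuration
cursor = A @ hand                         # its unique screen location

# Redundancy: vectors in the null space of A leave the cursor unchanged,
# so infinitely many hand postures reach the same screen point.
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[2:]                       # (N-2) rows spanning the null space

hand2 = hand + 3.0 * null_basis[0]        # a different posture ...
cursor2 = A @ hand2                       # ... but the same cursor position
assert np.allclose(cursor, cursor2)
```

Motion along the null-space directions is exactly the "degrees of freedom not contributing to cursor motion" that the task dissociates from task-relevant motion.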
Previous experiments using this approach found that when subjects receive real-time visual feedback of cursor motion, they learn how spatial relationships between screen targets may be used to constrain the choice of hand postures. This learning generalized beyond the regions of the screen workspace explored during training, allowing subjects to successfully acquire new targets. Practicing the target-capture task induced the formation of finger coordination patterns such that motion in degrees of freedom not contributing to cursor motion was reduced. These observations provide compelling evidence that subjects learned an inverse of the hand-to-screen mapping, thereby acquiring a 'motor representation' of the Euclidean space onto which finger movements were mapped.
Here we present recent experiments assessing the stability of these newly learned coordination patterns when the nominal hand-to-screen mapping is distorted through application of a simple scaling or rotation transformation of cursor motion on the screen. We sought to revisit the hypothesis that the brain processes errors in movement direction and extent separately and in computationally distinct ways during learning (Bock, 1992; Krakauer et al., 2000; Vindras et al., 2005). Specifically, we tested whether extent errors induce learning of a single gain factor applied globally across all movement directions and extents, whereas direction errors induce learning of 'rotation parameters' that generalize only to movements of different extent made in the trained direction (Krakauer et al., 2000).
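The two distortion types can be sketched as planar transforms of the nominal cursor displacement. The angle and gain below are assumed values for illustration only; the point is that a rotation perturbs direction while preserving extent, and a scaling perturbs extent while preserving direction.

```python
import numpy as np

# Illustrative distortions of cursor motion (angle and gain are assumed,
# not the experimental values).
theta = np.deg2rad(30.0)                  # hypothetical rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
gain = 1.5                                # hypothetical scaling factor

cursor = np.array([1.0, 0.0])             # nominal cursor displacement
rotated = R @ cursor                      # direction distorted, extent preserved
scaled = gain * cursor                    # extent distorted, direction preserved

# Rotation leaves movement extent (vector norm) unchanged;
# scaling multiplies it by a single global gain.
assert np.isclose(np.linalg.norm(rotated), np.linalg.norm(cursor))
assert np.isclose(np.linalg.norm(scaled), gain * np.linalg.norm(cursor))
```

Under the hypothesis being tested, adapting to the scaling would amount to learning the one scalar `gain` (generalizing across all directions), while adapting to the rotation would involve direction-specific parameters that generalize mainly across extents within the trained direction.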