Tue, April 27
Speaker: Ganesh Gowrishankar, PhD (ATR Computational Neuroscience Laboratories, Kyoto, Japan)
Title: Mechanisms of motor learning: in humans, for robots
Abstract: It is remarkable to observe how the human central nervous system (CNS) integrates various sensory signals to control a complex system, the human body, and to learn new motor tasks. Even a human baby exhibits better motor control and adaptation capabilities than the best of current robots, which shows that we still have much to learn about the human CNS. My research aims at understanding the control and learning mechanisms in humans through psychophysical and imaging experiments, and at applying these findings to robotics. Applications in robotics serve both to improve robots and to show that the proposed computational models can work on real systems.
In the first part of my talk I will present an experiment investigating motor optimization in a task featuring multiple solutions (multiple error-effort optima). We introduce a novel multi-solution co-activation paradigm that lets subjects repetitively (but inconspicuously) use different solutions, allowing us to observe how exploration of multiple solutions affects their motor behavior. The results show that behavior is largely influenced by motor memory: subjects tend to involuntarily repeat a recent suboptimal task-satisfying solution, even after sufficient experience of the optimal solution. This suggests that the CNS does not optimize co-activation tasks globally, but instead determines motor behavior through a trade-off among motor memory, error minimization, and effort minimization.
Interaction of a robot with dynamic environments requires continuous adaptation of force and impedance, which is generally not available in current robot systems. In contrast, humans learn novel task dynamics with appropriate force and impedance through the concurrent minimization of error and energy, and they can modify their movement trajectory to comply with obstacles and minimize forces. In the second part of the talk I will present recent work, done as part of the EU VIACTORS project, implementing a bio-mimetic motor learning algorithm that enables human-like, task-dependent adaptation of feedforward torque, impedance, and trajectory in robots.
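The flavor of such error/energy-driven adaptation can be illustrated with a minimal sketch. This is not the speaker's actual algorithm; the update rule, parameter names, and values below are illustrative assumptions: an error-driven term grows the feedforward command and stiffness, while a decay term models effort/energy minimization.

```python
# Hypothetical sketch of concurrent error- and energy-driven adaptation
# of a feedforward torque command and an impedance (stiffness) gain.
# All names and constants are illustrative, not from the talk.

def adapt(u_ff, k, error, alpha=0.8, beta=0.5, gamma=0.1):
    """One trial of adaptation.

    u_ff  : feedforward torque command
    k     : impedance (stiffness) gain
    error : trajectory tracking error on this trial
    alpha : error-driven learning rate for feedforward torque
    beta  : error-driven growth rate for impedance
    gamma : decay rate modeling effort/energy minimization
    """
    u_ff = (1 - gamma) * u_ff + alpha * error   # learn the task dynamics
    k = (1 - gamma) * k + beta * abs(error)     # stiffen while errors are large
    return u_ff, k

# Simulate trials against a constant disturbance torque: the feedforward
# command converges toward compensating it, and stiffness settles at a
# level balancing residual error against the effort-decay term.
disturbance = 2.0
u_ff, k = 0.0, 0.0
for _ in range(50):
    error = disturbance - u_ff  # residual error after feedforward compensation
    u_ff, k = adapt(u_ff, k, error)
print(round(u_ff, 2), round(k, 2))
```

The key qualitative property, matching the description above, is that impedance rises when errors are large and relaxes (via the decay term) once the feedforward command has absorbed the predictable part of the task dynamics.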
Host: Max Berniker