
Optimal Control in Eye-Hand Coordination

Julian J. Tramper, Tom Erez, William D. Smart, and Stan C. A. M. Gielen.
In "Society for Neuroscience Annual Meeting: Neuroscience 2011", Washington, DC, 2011.

Many daily-life activities require tight coordination between the eyes and hands, in both space and time. For example, when picking up a cup, the hand's reach toward the cup (goal-directed behavior) must be coordinated with information-seeking behavior, such as determining the cup's location and orientation, because we are uncertain about the positions of the cup and the hand. Successful completion of this task requires coordination between perception and action. We use our gaze to perceive the state of the world and of our body within it (e.g., the location of the cup relative to the body), and we change the state of the world through our own actions (e.g., using our hand to move the cup). At the same time, what we perceive depends on the state of our body (e.g., gaze direction) and can be changed by our own actions (e.g., refixating gaze on a new location).

In this study, we investigate the interaction between perception and action in a bimanual task in which subjects were instructed to move a left and a right virtual hand to a common target within three seconds while passing between a pair of obstacles. The obstacle positions changed on every trial, alternating randomly among seven different arrangements. During the experiment, eye movements and the positions of the virtual hands were recorded; no instructions were given regarding eye movements during a trial. We examined the gaze and hand trajectories in detail, as well as the timing of gaze relative to hand position.

To predict the optimal coordination between gaze and hands, we developed an optimal control model in which an agent performs the same task as the subjects. We model the planar positions of the gaze, hands, obstacles, and target as a continuous-state partially observable Markov decision process (POMDP). The agent maintains a belief state, a probability distribution over all possible states, representing its uncertain sense of the world. We define a reward function that penalizes collisions with the obstacles and rewards terminating near the target; the agent's goal is to maximize the expected cumulative reward within three seconds. By solving for a locally optimal plan through belief space, we generate a coordinated movement of gaze and hands. A detailed comparison between the model's solution and the experimental findings demonstrates that the human interaction between vision and motor control agrees with the optimal solution of the POMDP model.
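The two ingredients described above — a Gaussian belief over positions that is sharpened by observations, and a reward that penalizes obstacle collisions while rewarding terminal proximity to the target — can be sketched as follows. This is a minimal illustration under assumed forms (independent per-axis Gaussian beliefs, a fixed collision radius, and a terminal distance cost), not the authors' actual implementation; all function names and parameter values are hypothetical.

```python
import numpy as np

def belief_update(mean, var, obs, obs_var):
    """Per-axis Kalman update of a Gaussian belief over a planar position.

    In the model, fixating gaze near a hand would correspond to a small
    obs_var, so the belief contracts quickly; peripheral vision would
    correspond to a large obs_var and a weaker update.
    """
    gain = var / (var + obs_var)          # Kalman gain per axis
    new_mean = mean + gain * (obs - mean) # pull belief toward the observation
    new_var = (1.0 - gain) * var          # uncertainty shrinks after observing
    return new_mean, new_var

def reward(hand_pos, target, obstacles, obstacle_radius=0.05, done=False):
    """Penalize collisions with obstacles; reward terminating near the target.

    obstacle_radius and the collision penalty magnitude are assumed values.
    """
    r = 0.0
    for obstacle in obstacles:
        if np.linalg.norm(hand_pos - obstacle) < obstacle_radius:
            r -= 1.0  # collision penalty
    if done:
        r -= np.linalg.norm(hand_pos - target)  # terminal cost: distance to target
    return r
```

A belief-space planner would roll these forward over the 3-second horizon, choosing gaze and hand controls that trade off information gathering (reducing `var`) against progress toward the target.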

External link: [link]

@inproceedings{tramper2011eyehand,
  author = {Tramper, Julian J. and Erez, Tom and Smart, William D. and Gielen, Stan C. A. M.},
  title = {Optimal Control in Eye-Hand Coordination},
  booktitle = {Society for Neuroscience Annual Meeting: Neuroscience 2011},
  address = {Washington, DC},
  year = {2011}
}