Visual Perception in Robot Teleoperation

This project is part of the Tele-Robotic Intelligent Nursing Assistant (TRINA) family of research in the Human-Inspired Robotics Lab.

As teleoperation technologies become more prevalent, operator performance will increasingly depend on tools that reduce cognitive load, or mental effort. One process that demands substantial cognitive effort is visual perception of the remote environment, i.e., spatial awareness. The more naturally and intuitively an operator can visually explore that environment, the easier it is to make high-level execution decisions (e.g., which object to grab) and low-level motion-planning decisions (e.g., whether the current trajectory will cause a collision).

The goals of this project are:

  1. To identify the motions, behaviors, and spatial relationships that humans use to explore their environment.
  2. To recreate those motions, behaviors, and spatial relationships in a remote environment, and determine their effect on teleoperation performance.
  3. To design a semi-autonomous perception system that minimizes cognitive load while providing sufficient visual information to the operator.

Updates

Summer 2018 – Alexandra and Ozan have launched a human motion study to examine how humans use their own bodies to explore an environment through cameras. Contact arvaliton@wpi.edu to be a part of the study!

Human Perception Study

Projects for Interns, Masters Students, and MQPs

  1. Data Processing – Using MATLAB, Python, or another high-level language to look for trends in data from the human motion study.
  2. Dynamic Head Camera Design – Creating a multi-DOF mount for a camera on Baxter, the humanoid robot. The platform should respond to natural body movement cues from the operator and move the camera intuitively.
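As a starting point for the data-processing project above, a minimal sketch of trend-finding in motion-study data. The column name and units here are hypothetical placeholders (the actual study's logging format is not specified); the sketch fits a linear trend to a synthetic stand-in for logged head-yaw samples:

```python
import numpy as np

def linear_trend(t, y):
    """Least-squares fit y = slope * t + intercept; return (slope, intercept)."""
    slope, intercept = np.polyfit(t, y, deg=1)
    return slope, intercept

# Synthetic stand-in for logged head-yaw samples (radians) over a 10 s trial.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
yaw = 0.05 * t + 0.02 * rng.standard_normal(t.size)  # slow drift plus sensor noise

slope, intercept = linear_trend(t, yaw)
print(f"estimated yaw drift: {slope:.3f} rad/s")
```

The same fit applies to any scalar channel from the study (head pitch, torso lean, gaze dwell time); real logs would replace the synthetic array with values loaded from the recorded files.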