Communication Via Motion In Challenging Environments
Communicating from a robot to a human underwater is a challenging problem, as the primary vectors of communication (voice and display monitors) are unusable in that environment. One method of conveying information that has not been thoroughly explored is the robot's own motion. This project proposes a system for underwater robot-to-human communication based on the motion of the robot: we define a set of kinemes, motions with an associated informational meaning, and produce those kinemes with Aqua, our AUV.
The initial version of this system was implemented in simulation using Unreal Engine 4. To compare it against another plausible communication method, a baseline system was built using an Arduino to flash colored lights. A small study with 24 participants was conducted, validating the use of motion as a vector for robot-to-human communication.
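To give a sense of what such a light-code baseline involves, the sketch below is a minimal Arduino-style example of flashing colored LEDs to encode messages. It is an illustration only, not the study's actual implementation; the pin assignments, blink patterns, and message meanings are assumptions.

```cpp
// Minimal sketch of an Arduino light-code baseline (illustrative only;
// pins, patterns, and meanings are assumptions, not the study's code).
const int RED_PIN = 9;    // hypothetical pin wired to a red LED
const int GREEN_PIN = 10; // hypothetical pin wired to a green LED

void setup() {
  pinMode(RED_PIN, OUTPUT);
  pinMode(GREEN_PIN, OUTPUT);
}

// Flash one LED a given number of times to encode a message.
void flashCode(int pin, int count, int onMs, int offMs) {
  for (int i = 0; i < count; i++) {
    digitalWrite(pin, HIGH);
    delay(onMs);
    digitalWrite(pin, LOW);
    delay(offMs);
  }
}

void loop() {
  flashCode(RED_PIN, 3, 250, 250);   // e.g., a warning as three red flashes
  delay(2000);                       // pause between codes
  flashCode(GREEN_PIN, 2, 500, 250); // e.g., an all-clear as two green flashes
  delay(2000);
}
```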
Pilot Study
Participants in the study understood the meaning of the robot's motion correctly more often and with higher confidence than with the lights, and became increasingly successful with more instruction beforehand. Comparisons of accuracy, confidence, operational accuracy (accuracy restricted to answers rated 3 or higher in confidence), and time taken to answer at three different education levels can be seen below.
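To make the operational-accuracy metric concrete, the following sketch computes both metrics from a list of answers. The data layout and the confidence scale (assumed here to be 1-5) are illustrative assumptions.

```cpp
#include <vector>

// One participant answer: whether it was correct and the self-reported
// confidence (assumed to be on a 1-5 scale).
struct Response {
  bool correct;
  int confidence;
};

// Plain accuracy: fraction of all answers that were correct.
double accuracy(const std::vector<Response>& responses) {
  if (responses.empty()) return 0.0;
  int correct = 0;
  for (const Response& r : responses)
    if (r.correct) correct++;
  return static_cast<double>(correct) / responses.size();
}

// Operational accuracy: accuracy restricted to answers the
// participant rated with confidence 3 or higher.
double operationalAccuracy(const std::vector<Response>& responses) {
  int total = 0, correct = 0;
  for (const Response& r : responses) {
    if (r.confidence >= 3) {
      total++;
      if (r.correct) correct++;
    }
  }
  return total == 0 ? 0.0 : static_cast<double>(correct) / total;
}
```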
Further, comparisons between the kineme and light-code communication systems can be seen for each individual kineme.
Statistically significant increases in accuracy and operational accuracy were found for the kineme system at almost every education level, tested at a significance level of 0.05. Furthermore, participants preferred the kineme-based system over the light-based system, especially when considering its use at a distance or underwater.
More information on the results and significance testing of the kineme and light-code systems can be found in the relevant papers on the Publications page.
Future Work
This concept will be applied to more robots, both 6DOF and 3DOF; improved with the addition of supplementary lights and tonal sound interaction; grounded by the addition of gaze direction estimation and other interaction guides; and further validated by larger, higher-fidelity studies.