Stanford Studies Control Schemes for Three-armed Avatars in VR
Researchers at Stanford think that having a third arm in VR could make you a more efficient (virtual) human. So they’ve set out to learn what they can about the most effective means of controlling an extra limb in VR.
Thanks to high quality VR motion controllers, computer users are beginning to reach into the digital world in an entirely new and tangible way. But this is virtual reality after all, and we can do whatever we want, so why be restricted to a mere two arms? Researchers at Stanford’s Virtual Human Interaction Lab have finally said “enough is enough,” and have begun studying which control schemes are most effective for use with a virtual third arm.
Since users have only ever lived with two arms, a virtual third arm would need to be easy to learn to control to be of any use. In a paper published in the journal Presence: Teleoperators and Virtual Environments, Bireswar Laha, Jeremy N. Bailenson, Andrea Stevenson Won, and Jakki O. Bailey defined three methods of controlling a third arm that extends outward from the virtual user’s chest.
The first method controls the arm via the user’s head: turning and tilting the head causes the arm to move in a relatively intuitive way. The second method, which the researchers call ‘Bimanual’, uses the horizontal rotation of one controller combined with the vertical rotation of a second controller as inputs for the arm. And the third method, called ‘Unimanual’, uses the horizontal and vertical rotation of just a single controller to drive the third arm.
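The three schemes differ only in where the two rotation inputs come from. A minimal sketch of that mapping, assuming yaw/pitch angles are read from the head or controllers (all function and parameter names here are illustrative; the paper does not publish its implementation):

```python
import math

def arm_direction(yaw_deg, pitch_deg):
    """Turn a yaw/pitch pair (degrees) into a unit direction vector
    for the third arm, y-up convention."""
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),   # x: left/right
            math.sin(pitch),                   # y: up/down
            math.cos(pitch) * math.cos(yaw))   # z: forward

# Head-control: both axes come from head orientation.
def head_control(head_yaw, head_pitch):
    return arm_direction(head_yaw, head_pitch)

# Bimanual: horizontal rotation from one controller,
# vertical rotation from the other.
def bimanual(controller_a_yaw, controller_b_pitch):
    return arm_direction(controller_a_yaw, controller_b_pitch)

# Unimanual: both axes from a single controller.
def unimanual(controller_yaw, controller_pitch):
    return arm_direction(controller_yaw, controller_pitch)
```

The key design difference is clear from the signatures: bimanual splits one logical input across two devices, which is likely part of why participants found it harder to coordinate.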
The paper, called Evaluating Control Schemes for the Third Arm of an Avatar, details an experiment the researchers designed to test the efficacy of each control scheme in virtual reality. The task: tap a block that randomly turns white within a grid of blocks, with one grid for the left arm, another for the right arm, and a third set that’s farther away and only reachable by the third arm. The paper’s abstract reads:
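The target-rotation logic of the task can be sketched roughly as follows (grid sizes and names are assumptions for illustration, not details from the paper):

```python
import random

# Three grids of blocks, each reachable by exactly one arm; a randomly
# chosen block turns white and must be tapped with the matching arm.
GRIDS = {"left_arm": 9, "right_arm": 9, "third_arm": 9}  # blocks per grid

def next_target(rng=random):
    """Pick which grid lights up and which block within it turns white."""
    grid = rng.choice(sorted(GRIDS))
    block = rng.randrange(GRIDS[grid])
    return grid, block
```

Measuring time-to-tap on each target across the three schemes would yield the task-performance comparison the study reports.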
Recent research on immersive virtual environments has shown that users can not only inhabit and identify with novel avatars with novel body extensions, but also learn to control novel appendages in ways beneficial to the task at hand. But how different control schemas might affect task performance and body ownership with novel avatar appendages has yet to be explored. In this article, we discuss the design of control schemas based on the theory and practice of 3D interactions applied to novel avatar bodies. Using a within-subjects design, we compare the effects of controlling a third arm with three different control schemas (bimanual, unimanual, and head-control) on task performance, simulator sickness, presence, and user preference. Both the unimanual and the head-control were significantly faster, elicited significantly higher body ownership, and were preferred over the bimanual control schema. Participants felt that the bimanual control was significantly more difficult than the unimanual control, and elicited less appendage agency than the head-control. There were no differences in reported simulator sickness. We discuss the implications of these results for interface design.
Ultimately, the idea of a third arm in VR is something of a metaphor. When you break it down, the study is really about VR input schemes that use traditionally non-input motions as input. Abstract as it is, a third arm is a more immediately understandable input concept, because we already have arms and know how they work and what they’re good at. But this research readily applies to other input modalities, like the commonly seen laser-pointer interfaces and gaze-based interfaces that are already employed in the VR space.