Profile
Andrew Steven Puika, M.Sc.
Publications
Comparison of a speech-based and a pie-menu-based interaction metaphor for application control
Choosing an adequate system control technique is crucial to supporting complex interaction scenarios in virtual reality applications. In this work, we compare an existing hierarchical pie-menu-based approach with a speech-recognition-based one in terms of task performance and user experience in a formal user study. As a testbed, we use a factory planning application featuring a large set of system control options.
@INPROCEEDINGS{Pick:691795,
author = {Pick, Sebastian and Puika, Andrew S. and Kuhlen, Torsten},
title = {{C}omparison of a speech-based and a pie-menu-based
interaction metaphor for application control},
address = {Piscataway, NJ},
publisher = {IEEE},
reportid = {RWTH-2017-06169},
pages = {381-382},
year = {2017},
comment = {2017 IEEE Virtual Reality (VR) : proceedings : March 18-22,
2017, Los Angeles, CA, USA / Evan Suma Rosenberg, David M.
Krum, Zachary Wartell, Betty Mohler, Sabarish V. Babu, Frank
Steinicke, and Victoria Interrante ; sponsored by IEEE
Computer Society, Visualization and Graphics Technical
Committee},
booktitle = {2017 IEEE Virtual Reality (VR) :
proceedings : March 18-22, 2017, Los
Angeles, CA, USA / Evan Suma Rosenberg,
David M. Krum, Zachary Wartell, Betty
Mohler, Sabarish V. Babu, Frank
Steinicke, and Victoria Interrante ;
sponsored by IEEE Computer Society,
Visualization and Graphics Technical
Committee},
month = {Mar},
date = {2017-03-18},
organization = {2017 IEEE Virtual Reality, Los
Angeles, CA (USA), 18 Mar 2017 - 22 Mar
2017},
cin = {124620 / 120000 / 080025},
cid = {$I:(DE-82)124620_20151124$ / $I:(DE-82)120000_20140620$ /
$I:(DE-82)080025_20140620$},
pnm = {B-1 - Virtual Production Intelligence},
pid = {G:(DE-82)X080025-B-1},
typ = {PUB:(DE-HGF)7 / PUB:(DE-HGF)8},
UT = {WOS:000403149400114},
doi = {10.1109/VR.2017.7892336},
url = {http://publications.rwth-aachen.de/record/691795},
}
SWIFTER: Design and Evaluation of a Speech-based Text Input Metaphor for Immersive Virtual Environments

Text input is an important part of the data annotation process, where text is used to capture ideas and comments. For text entry in immersive virtual environments, where standard keyboards usually do not work, various approaches have been proposed. While these solutions have mostly proven effective, certain shortcomings remain that make further investigation worthwhile. Motivated by recent research, we propose the speech-based multimodal text entry system SWIFTER, which strives for simplicity while maintaining good performance. In an initial user study, we compared our approach to smartphone-based text entry within a CAVE-like virtual environment. Results indicate that SWIFTER reaches an average input rate of 23.6 words per minute and is positively received by users in terms of user experience.