Abstract
This work introduces an interaction technique for determining the user’s non-verbal deixis in Virtual Reality (VR) applications, tailored to multimodal speech & gesture interfaces (MMIs). In such interfaces, non-verbal deixis is typically determined by ray-casting, owing to its simplicity and intuitiveness. However, ray-casting’s rigidity and its binary hit-or-miss nature limit an MMI’s flexibility and efficiency. In contrast, our technique considers a more comprehensive set of directional cues to determine non-verbal deixis and provides probabilistic output to address these limitations. We present a machine-learning-based reference implementation of our technique in VR and the results of a first performance benchmark. Future work includes an in-depth user study evaluating our technique’s user experience in an MMI.