Welcome to the Virtual Reality & Immersive Visualization Group at RWTH Aachen University!

The Virtual Reality and Immersive Visualization Group started in 1998 as a service team in the RWTH IT Center. Since 2015, we have been a research group (Lehr- und Forschungsgebiet) at i12 within the Computer Science Department. Moreover, the group is a member of the Visual Computing Institute and continues to be an integral part of the RWTH IT Center.

In a unique combination of research, teaching, services, and infrastructure, we provide Virtual Reality technologies and the underlying methodology as a powerful tool for scientific-technological applications.

In terms of basic research, we develop advanced methods and algorithms for multimodal 3D user interfaces and explorative analyses in virtual environments. Furthermore, we focus on application-driven, interdisciplinary research in collaboration with RWTH Aachen institutes, Forschungszentrum Jülich, research institutions worldwide, and partners from business and industry, covering fields like simulation science, production technology, neuroscience, and medicine.

To this end, we are members of / associated with the following institutes and facilities:

Our offices are located in the RWTH IT Center, where we operate one of the largest Virtual Reality labs worldwide. The aixCAVE, a 30 m² visualization chamber that makes it possible to interactively explore virtual worlds, is open for use by any RWTH Aachen research group.

News

Andrea Bönsch receives doctoral degree from RWTH Aachen University

Today, our colleague Andrea Bönsch successfully passed her Ph.D. defense and received a doctoral degree from RWTH Aachen University for her thesis on "Social Wayfinding Strategies to Explore Immersive Virtual Environments". Congratulations!

June 26, 2024

Aachen Cathedral Demo: Exploring Aachen Cathedral UNESCO World Heritage Site in Virtual Reality


June 20, 2024

29th ACM Symposium on Virtual Reality Software and Technology (VRST 2023)

Together with Dr. Daniel Zielasko from the University of Trier, our colleague Dr. Tim Weißker presented his paper entitled "Stay Vigilant: The Threat of a Replication Crisis in VR Locomotion Research" at the 29th ACM Symposium on Virtual Reality Software and Technology (VRST 2023). Their work received the Best Paper Award. Congratulations!

Oct. 12, 2023

23rd ACM International Conference on Intelligent Virtual Agents (IVA23)

Jonathan Ehret presented his paper entitled "Who's next? Integrating Non-Verbal Turn-Taking Cues for Embodied Conversational Agents" at the 23rd ACM International Conference on Intelligent Virtual Agents. Furthermore, Andrea Bönsch presented two posters in the realm of virtual agents supporting scene exploration, either as conversing groups or as a method for constrained navigation.

Sept. 19, 2023

The SPP AUDICTIVE conference took place, and we contributed two project presentations to the program.


June 30, 2023

Christian Nowke receives doctoral degree from University of Trier

Today, our colleague Christian Nowke successfully passed his Ph.D. defense and received a doctoral degree from the University of Trier for his thesis on "Semantic-Aware Coordinated Multiple Views for the Interactive Analysis of Neural Activity Data". Congratulations!

May 22, 2023

Recent Publications

Wayfinding in Immersive Virtual Environments as Social Activity Supported by Virtual Agents

Frontiers in Virtual Reality, Section Virtual Reality and Human Behaviour

Effective navigation and interaction within immersive virtual environments rely on thorough scene exploration. Therefore, wayfinding is essential, assisting users in comprehending their surroundings, planning routes, and making informed decisions. As real-life observations show, wayfinding is not only a cognitive process but also a social activity profoundly influenced by the presence and behaviors of others. In virtual environments, these 'others' are virtual agents (VAs), defined as anthropomorphic computer-controlled characters, who enliven the environment and can serve as background characters or direct interaction partners. However, little research has explored how to efficiently use VAs as social wayfinding support. In this paper, we aim to assess and contrast user experience, user comfort, and the acquisition of scene knowledge through a between-subjects study involving n = 60 participants across three distinct wayfinding conditions in one slightly populated urban environment: (i) unsupported wayfinding, (ii) strong social wayfinding using a virtual supporter who incorporates guiding and accompanying elements while directly impacting the participants' wayfinding decisions, and (iii) weak social wayfinding using flows of VAs that subtly influence the participants' wayfinding decisions by their locomotion behavior. Our work is the first to compare the impact of VAs' behavior in virtual reality on users' scene exploration, including spatial awareness, scene comprehension, and comfort. The results show the general utility of social wayfinding support, while underscoring the superiority of the strong type. Nevertheless, further exploration of weak social wayfinding as a promising technique is needed. Thus, our work contributes to the enhancement of VAs as advanced user interfaces, increasing user acceptance and usability.

A Lecturer’s Voice Quality and its Effect on Memory, Listening Effort, and Perception in a VR Environment

Scientific Reports

Many lecturers develop voice problems, such as hoarseness. Nevertheless, research on how voice quality influences listeners’ perception, comprehension, and retention of spoken language is limited to a small number of audio-only experiments. We aimed to address this gap by using audio-visual virtual reality (VR) to investigate the impact of a lecturer’s hoarseness on university students’ heard text recall, listening effort, and listening impression. Fifty participants were immersed in a virtual seminar room, where they engaged in a Dual-Task Paradigm. They listened to narratives presented by a virtual female professor, who spoke in either a typical or hoarse voice. Simultaneously, participants performed a secondary task. Results revealed significantly prolonged secondary-task response times with the hoarse voice compared to the typical voice, indicating increased listening effort. Subjectively, participants rated the hoarse voice as more annoying, effortful to listen to, and impeding for their cognitive performance. No effect of voice quality was found on heard text recall, suggesting that, while hoarseness may compromise certain aspects of spoken language processing, this might not necessarily result in reduced information retention. In summary, our findings underscore the importance of promoting vocal health among lecturers, which may contribute to enhanced listening conditions in learning spaces.

IntenSelect+: Enhancing Score-Based Selection in Virtual Reality

IEEE Transactions on Visualization and Computer Graphics, 2024

Object selection in virtual environments is one of the most common and recurring interaction tasks. Therefore, the used technique can critically influence a system’s overall efficiency and usability. IntenSelect is a scoring-based selection-by-volume technique that was shown to offer improved selection performance over conventional raycasting in virtual reality. The benefits of this initial method, however, are most pronounced for small spherical objects with a point-like appearance; moreover, it is challenging to parameterize and has inherent limitations in terms of flexibility. We present an enhanced version of IntenSelect called IntenSelect+, designed to overcome multiple shortcomings of the original IntenSelect approach. In an empirical within-subjects user study with 42 participants, we compared IntenSelect+ to IntenSelect and conventional raycasting on various complex object configurations motivated by prior work. In addition to replicating the previously shown benefits of IntenSelect over raycasting, our results demonstrate significant advantages of IntenSelect+ over IntenSelect regarding selection performance, task load, and user experience. We, therefore, conclude that IntenSelect+ is a promising enhancement of the original approach that enables faster, more precise, and more comfortable object selection in immersive virtual environments.
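The score-based idea behind such selection-by-volume techniques can be illustrated with a minimal sketch (all function names and parameters here are hypothetical, not the authors' implementation): each frame, objects inside a selection cone receive an instantaneous score based on their angular distance from the pointing ray, and per-object scores are accumulated over time so that the highlighted candidate stays stable against hand jitter.

```python
import math

def selection_score(ray_origin, ray_dir, obj_pos, cone_angle_deg=15.0):
    """Instantaneous score in [0, 1] for one object inside a selection cone.

    ray_dir is assumed to be normalized. Objects outside the cone score 0;
    objects closer to the cone's center line score higher.
    """
    v = [obj_pos[i] - ray_origin[i] for i in range(3)]
    dist = math.sqrt(sum(c * c for c in v))
    if dist == 0.0:
        return 1.0  # object coincides with the ray origin
    cos_a = sum(v[i] * ray_dir[i] for i in range(3)) / dist
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
    if angle > cone_angle_deg:
        return 0.0
    return 1.0 - angle / cone_angle_deg

def update_scores(accumulated, instant, growth=0.9):
    """Temporal accumulation: scores rise while an object is targeted
    and decay once the ray moves away, damping selection jitter."""
    keys = set(accumulated) | set(instant)
    return {k: growth * accumulated.get(k, 0.0)
               + (1.0 - growth) * instant.get(k, 0.0)
            for k in keys}
```

The object with the highest accumulated score would then be highlighted as the selection candidate each frame; the decay factor trades responsiveness against stability.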
