Welcome to the Virtual Reality and Immersive Visualization Group at RWTH Aachen University!
The Virtual Reality and Immersive Visualization Group started in 1998 as a service team at the RWTH IT Center. Since 2015, we have been a research group (Lehr- und Forschungsgebiet) at i12 within the Computer Science Department. Moreover, the group is a member of the Visual Computing Institute and continues to be an integral part of the RWTH IT Center.
In a unique combination of research, teaching, services, and infrastructure, we provide Virtual Reality technologies and the underlying methodology as a powerful tool for scientific-technological applications.
In terms of basic research, we develop advanced methods and algorithms for multimodal 3D user interfaces and explorative analyses in virtual environments. Furthermore, we focus on application-driven, interdisciplinary research in collaboration with RWTH Aachen institutes, Forschungszentrum Jülich, research institutions worldwide, and partners from business and industry, covering fields like simulation science, production technology, neuroscience, and medicine.
To this end, we are members of or associated with a number of institutes and facilities.
Our offices are located in the RWTH IT Center, where we operate one of the largest Virtual Reality labs worldwide. The aixCAVE, a 30 m² visualization chamber, makes it possible to interactively explore virtual worlds and is open for use by any RWTH Aachen research group.
News
- **Martin Bellgardt receives doctoral degree from RWTH Aachen University** (April 30, 2025). Today, our former colleague Martin Bellgardt successfully passed his Ph.D. defense and received a doctoral degree from RWTH Aachen University for his thesis "Increasing Immersion in Machine Learning Pipelines for Mechanical Engineering". Congratulations!

- **Active Participation at the 2024 IEEE VIS Conference (VIS 2024)** (Oct. 22, 2024). At this year's IEEE VIS Conference, several contributions from our visualization group were presented. Dr. Tim Gerrits chaired the 2024 SciVis Contest and presented two accepted papers: the short paper "DaVE - A Curated Database of Visualization Examples" by Jens Koenen, Marvin Petersen, Christoph Garth, and Dr. Tim Gerrits, as well as the contribution "Exploring Uncertainty Visualization for Degenerate Tensors in 3D Symmetric Second-Order Tensor Field Ensembles" by Tadea Schmitz and Dr. Tim Gerrits to the Workshop on Uncertainty Visualization, which received the Best Paper Award. Congratulations!

- **Best Paper Honorable Mention at VRST 2024** (Oct. 11, 2024). A Best Paper Honorable Mention Award of VRST 2024 was given to Sevinc Eroglu for her paper "Choose Your Reference Frame Right: An Immersive Authoring Technique for Creating Reactive Behavior".

- **Tim Gerrits as Invited Keynote Speaker at the ParaView User Days in Lyon** (Sept. 26, 2024). ParaView, developed by Kitware, is one of the most widely used open-source visualization and analysis tools in research and industry. For the second edition of the ParaView User Days, Dr. Tim Gerrits was invited to share his insights on developing and providing visualization within the academic community.

- **Invited Talk at Visual Computing for Biology and Medicine** (Sept. 20, 2024). This year's Eurographics Symposium on Visual Computing for Biology and Medicine (VCBM) in Magdeburg included a VCBM Fachgruppe meeting with an invited presentation by Dr. Tim Gerrits on "Harnessing High Performance Infrastructure for Scientific Visualization of Medical Data".

- **24th ACM International Conference on Intelligent Virtual Agents (IVA'24)** (Sept. 16, 2024). Together with Willem-Paul Brinkman from TU Delft, our colleague Dr. Andrea Bönsch presented her work on "German and Dutch Translations of the Artificial-Social-Agent Questionnaire Instrument for Evaluating Human-Agent Interactions" at IVA 2024.
Recent Publications
Objectifying Social Presence: Evaluating Multimodal Degraders in ECAs Using the Heard Text Recall Paradigm
*IEEE Transactions on Visualization and Computer Graphics*

Embodied conversational agents (ECAs) are key social interaction partners in various virtual reality (VR) applications, with their perceived social presence significantly influencing the quality and effectiveness of user-ECA interactions. This paper investigates the potential of the Heard Text Recall (HTR) paradigm as an indirect, objective proxy for evaluating social presence, which is traditionally assessed through subjective questionnaires. To this end, we use the HTR task, which was primarily designed to assess memory performance in listening tasks, in a dual-task paradigm to assess cognitive spare capacity and correlate the latter with subjectively rated social presence. As a prerequisite for this investigation, we introduce various co-verbal gesture modification techniques and assess their impact on the perceived naturalness of the presenting ECA, a crucial aspect fostering social presence. The main study then explores the applicability of HTR as a proxy for social presence by examining its effectiveness under different multimodal degraders of ECA behavior, including degraded co-verbal gestures, omitted lip synchronization, and the use of synthetic voices. The findings suggest that while HTR shows potential as an objective measure of social presence, its effectiveness is primarily evident in response to substantial changes in ECA behavior. Additionally, the study highlights the negative effects of synthetic voices and inadequate lip synchronization on perceived social presence, emphasizing the need for careful consideration of these elements in ECA design.

Audiovisual angle and voice incongruence do not affect audiovisual verbal short-term memory in virtual reality
*PLOS ONE*

Virtual reality (VR) environments are frequently used in auditory and cognitive research to imitate real-life scenarios. The visual component in VR has the potential to affect how auditory information is processed, especially if incongruences between the visual and auditory information occur. This study investigated how audiovisual incongruence in VR, implemented with a head-mounted display (HMD), affects verbal short-term memory compared to presentation of the same material on traditional computer monitors. Two experiments were conducted with both display devices and two types of audiovisual incongruence: angle (Exp. 1) and voice (Exp. 2) incongruence. To quantify short-term memory, an audiovisual verbal serial recall (avVSR) task was developed in which an embodied conversational agent (ECA) was animated to speak a digit sequence that participants had to remember. The results showed no overall effect of the display device on the proportion of correctly recalled digits, although subjective evaluations showed a higher sense of presence in the HMD condition. For the extreme conditions of angle incongruence in the computer monitor presentation, the proportion of correctly recalled digits increased marginally, presumably due to raised attention, but the effect size was negligible. Response times were not affected by incongruences for either display device across both experiments. These findings suggest that, at least for the conditions studied here, the avVSR task is robust against angle and voice audiovisual incongruences on both HMD and computer monitor displays.

Demo: A Latency-Optimized LLM-based Multimodal Dialogue System for Embodied Conversational Agents in VR
*ACM International Conference on Intelligent Virtual Agents (IVA)*

Interactions with Embodied Conversational Agents (ECAs) are essential in many social Virtual Reality (VR) applications, highlighting the growing demand for free-flowing, context-aware conversations supported by low-latency, multimodal ECA responses. We introduce a modular, extensible framework powered by a Large Language Model (LLM), featuring streaming-based optimization techniques specifically crafted for multimodal responses. Our system can control its own behavior and task execution, for instance moving through the Immersive Virtual Environment (IVE) under direct control of the LLM, and can also react to events in the IVE. In our study, the applied optimizations achieved a latency improvement of about 66% on average compared to having no optimizations.
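
The streaming optimizations themselves are detailed in the paper; as a rough, generic illustration of why streaming helps in such a pipeline, the Python sketch below forwards each completed sentence of an LLM token stream to speech synthesis immediately instead of waiting for the full reply. Both `stream_llm_tokens` and `synthesize` are hypothetical placeholders, not interfaces of the system described above.

```python
import re

# Whitespace that follows a sentence-ending punctuation mark.
SENTENCE_END = re.compile(r"(?<=[.!?])\s+")

def stream_response(stream_llm_tokens, synthesize, prompt):
    """Forward LLM output to speech synthesis sentence by sentence."""
    buffer = ""
    for token in stream_llm_tokens(prompt):  # hypothetical token stream
        buffer += token
        parts = SENTENCE_END.split(buffer)
        # Every element except the last is a complete sentence.
        for sentence in parts[:-1]:
            synthesize(sentence)  # hypothetical text-to-speech call
        buffer = parts[-1]
    if buffer.strip():  # flush any trailing, unterminated text
        synthesize(buffer.strip())
```

In this scheme, the time until the agent starts speaking scales with the length of the first sentence rather than with the whole reply, which is the general mechanism by which streaming can yield average latency savings of the magnitude reported above.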