
Welcome




Welcome to the Virtual Reality & Immersive Visualization Group
at RWTH Aachen University!

The Virtual Reality and Immersive Visualization Group started in 1998 as a service team at the RWTH IT Center. Since 2015, we have been a research group (Lehr- und Forschungsgebiet) at i12 within the Computer Science Department. Moreover, the Group is a member of the Visual Computing Institute and continues to be an integral part of the RWTH IT Center.

In a unique combination of research, teaching, services, and infrastructure, we provide Virtual Reality technologies and the underlying methodology as a powerful tool for scientific-technological applications.

In terms of basic research, we develop advanced methods and algorithms for multimodal 3D user interfaces and explorative analyses in virtual environments. Furthermore, we focus on application-driven, interdisciplinary research in collaboration with RWTH Aachen institutes, Forschungszentrum Jülich, research institutions worldwide, and partners from business and industry, covering fields like simulation science, production technology, neuroscience, and medicine.

To this end, we are members of or associated with several institutes and facilities.

Our offices are located in the RWTH IT Center, where we operate one of the largest Virtual Reality labs worldwide. The aixCAVE, a 30 sqm visualization chamber, makes it possible to interactively explore virtual worlds and is open for use by any RWTH Aachen research group.

News

Sevinc Eroglu receives doctoral degree from RWTH Aachen University

Today, our former colleague Sevinc Eroglu successfully passed her Ph.D. defense and received a doctoral degree from RWTH Aachen University for her thesis on "Building Immersive Worlds: Authoring Content and Interactivity in VR". Congratulations!

Jan. 9, 2026

Jonathan Ehret receives doctoral degree from RWTH Aachen University

Today, our former colleague Jonathan Ehret successfully passed his Ph.D. defense and received a doctoral degree from RWTH Aachen University for his thesis on "Enhancing Social Presence in Embodied Conversational Agents: A Multimodal Approach to Natural Communication". Congratulations!

Dec. 8, 2025

Martin Bellgardt receives doctoral degree from RWTH Aachen University

Today, our former colleague Martin Bellgardt successfully passed his Ph.D. defense and received a doctoral degree from RWTH Aachen University for his thesis on "Increasing Immersion in Machine Learning Pipelines for Mechanical Engineering". Congratulations!

April 30, 2025

Active Participation at 2024 IEEE VIS Conference (VIS 2024)

At this year's IEEE VIS Conference, our visualization group presented several contributions. Dr. Tim Gerrits chaired the 2024 SciVis Contest and presented two accepted papers: the short paper "DaVE - A Curated Database of Visualization Examples" by Jens Koenen, Marvin Petersen, Christoph Garth, and Dr. Tim Gerrits, as well as the contribution "Exploring Uncertainty Visualization for Degenerate Tensors in 3D Symmetric Second-Order Tensor Field Ensembles" by Tadea Schmitz and Dr. Tim Gerrits to the Workshop on Uncertainty Visualization, which received the Best Paper Award. Congratulations!

Oct. 22, 2024

Honorable Mention

A Best Paper Honorable Mention Award at VRST 2024 was given to Sevinc Eroglu for her paper entitled “Choose Your Reference Frame Right: An Immersive Authoring Technique for Creating Reactive Behavior”.

Oct. 11, 2024

Tim Gerrits as invited Keynote Speaker at the ParaView User Days in Lyon

ParaView, developed by Kitware, is one of the most widely used open-source visualization and analysis tools in research and industry. For the second edition of the ParaView User Days, Dr. Tim Gerrits was invited to share his insights into developing and providing visualization within the academic communities.

Sept. 26, 2024

Recent Publications

Objectifying Social Presence: Evaluating Multimodal Degraders in ECAs Using the Heard Text Recall Paradigm

IEEE Transactions on Visualization and Computer Graphics

Embodied conversational agents (ECAs) are key social interaction partners in various virtual reality (VR) applications, with their perceived social presence significantly influencing the quality and effectiveness of user-ECA interactions. This paper investigates the potential of the Heard Text Recall (HTR) paradigm as an indirect objective proxy for evaluating social presence, which is traditionally assessed through subjective questionnaires. To this end, we use the HTR task, which was primarily designed to assess memory performance in listening tasks, in a dual-task paradigm to assess cognitive spare capacity and correlate the latter with subjectively rated social presence. As a prerequisite for this investigation, we introduce various co-verbal gesture modification techniques and assess their impact on the perceived naturalness of the presenting ECA, a crucial aspect fostering social presence. The main study then explores the applicability of HTR as a proxy for social presence by examining its effectiveness under different multimodal degraders of ECA behavior, including degraded co-verbal gestures, omitted lip synchronization, and the use of synthetic voices. The findings suggest that while HTR shows potential as an objective measure of social presence, its effectiveness is primarily evident in response to substantial changes in ECA behavior. Additionally, the study highlights the negative effects of synthetic voices and inadequate lip synchronization on perceived social presence, emphasizing the need for careful consideration of these elements in ECA design.
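To make the proxy idea concrete: if the HTR-based dual-task measure tracks social presence, per-participant scores on the two should correlate. The following is a minimal illustrative sketch of that check with invented numbers; it is not the paper's data or analysis pipeline.

```python
# Illustrative sketch only: correlating an objective dual-task measure with
# subjectively rated social presence, as the HTR-as-proxy idea suggests.
# All values below are made up for illustration.
from statistics import correlation

# Hypothetical per-participant values: secondary-task accuracy (a proxy for
# cognitive spare capacity) and questionnaire-based social presence ratings.
spare_capacity = [0.91, 0.84, 0.77, 0.88, 0.69, 0.95, 0.81, 0.73]
social_presence = [5.8, 5.1, 4.3, 5.5, 3.9, 6.2, 4.9, 4.1]

# A strong positive correlation would support using the objective measure
# as an indirect proxy for the subjective construct.
r = correlation(spare_capacity, social_presence)
print(f"Pearson r = {r:.2f}")
```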

Heard-Text Recall and Listening Effort under Irrelevant Speech and Pseudo-Speech in Virtual Reality

To be published in: Acta Acustica united with Acustica

*Introduction*: Verbal communication depends on a listener’s ability to accurately comprehend and recall information conveyed in a conversation. The heard-text recall (HTR) paradigm can be used in a dual-task design to assess both memory performance and listening effort. In contrast to traditional tasks such as serial recall, this paradigm uses running speech to simulate a conversation between two talkers. Thereby, it allows for talker visualization in virtual reality (VR), providing co-verbal visual cues like lip movements, turn-taking cues, and gaze behavior. While this paradigm has been investigated under pink noise, the impact of more realistic irrelevant stimuli, such as speech, which provide temporal fluctuations and meaning that noise lacks, remains unexplored. *Methods*: In this study (N = 24), the HTR task was administered in an immersive VR environment under three noise conditions: silence, pseudo-speech, and speech. A vibrotactile secondary task was administered to quantify listening effort. *Results*: The results indicate an effect of irrelevant speech on memory and speech comprehension as well as on secondary-task performance, with a stronger impact of speech relative to pseudo-speech. *Discussion*: The study validates the sensitivity of the HTR paradigm in a dual-task design to background speech stimuli and highlights the relevance of linguistic interference-by-process for listening effort, speech comprehension, and memory.
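The logic of the dual-task design can be summarized in a few lines: listening effort is inferred from how much secondary-task performance drops when the primary listening task competes for cognitive resources. A minimal sketch with invented reaction times follows; the study's actual metrics and data are not reproduced here.

```python
# Minimal sketch of the dual-task logic: listening effort is inferred from how
# much secondary-task performance degrades under the primary listening task.
# Reaction times (ms) are invented for illustration.

def dual_task_cost(rt_baseline_ms: float, rt_dual_ms: float) -> float:
    """Relative slowdown of the secondary task; larger values suggest more
    listening effort, since fewer resources remain for the secondary task."""
    return (rt_dual_ms - rt_baseline_ms) / rt_baseline_ms

# Hypothetical mean vibrotactile reaction times per noise condition.
baseline = 420.0  # secondary task performed alone
conditions = {"silence": 455.0, "pseudo-speech": 510.0, "speech": 560.0}

for name, rt in conditions.items():
    print(f"{name:>13}: dual-task cost = {dual_task_cost(baseline, rt):.0%}")
```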

Fostering Engagement through a Latency-Optimized LLM-based Dialogue System for Multimodal ECA Responses

To be presented at: IEEE AIxVR 2026

Interactions with Embodied Conversational Agents (ECAs) are an integral part of many social Virtual Reality (VR) applications, increasing the need for free, context-sensitive conversations characterized by latency-optimized and multimodal ECA responses. Our presented methodology consists of three interdependent steps: We first present a holistic framework driven by a Large Language Model (LLM), which integrates existing technologies into a modular and extendable system that is developer-friendly and suitable for diverse use cases. Building on this foundation, our second step comprises streaming-based optimizations that effectively reduce measured response latency, thereby facilitating real-time conversations. Finally, we conduct a comparative analysis between our latency-optimized LLM-driven ECA and a conventional button-based Wizard-of-Oz (WoZ) system to evaluate performance differences in user engagement. Our insights reveal that users perceive our LLM-driven ECA as significantly more natural, competent, and trustworthy than its WoZ counterpart, despite objective measures indicating slightly higher latency in technical performance. These findings underscore the potential of LLMs to enhance engagement in ECAs within VR environments.
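As a rough illustration of the streaming idea: perceived latency drops when speech output begins as soon as the first complete sentence has been generated, rather than after the full reply. The sketch below is generic and not the framework presented in the paper; llm_stream and synthesize are hypothetical stand-ins for real LLM and TTS back ends.

```python
# Generic sketch of streaming-based latency reduction for an LLM-driven ECA:
# forward each complete sentence to TTS as soon as it arrives instead of
# waiting for the full response. llm_stream and synthesize are placeholders
# for real LLM and TTS back ends.
import re
from typing import Iterator

def llm_stream(prompt: str) -> Iterator[str]:
    # Stand-in for a token-streaming LLM API.
    for token in ["Hello! ", "Nice ", "to ", "meet ", "you. ", "How ",
                  "can ", "I ", "help?"]:
        yield token

def synthesize(sentence: str) -> None:
    # Stand-in for a TTS call that would also drive lip sync and gestures.
    print(f"TTS <- {sentence!r}")

def respond(prompt: str) -> None:
    buffer = ""
    for token in llm_stream(prompt):
        buffer += token
        # Flush on sentence boundaries so speech can start before the
        # LLM has finished generating the whole reply.
        while (match := re.search(r"[.!?]\s+", buffer)):
            synthesize(buffer[: match.end()].strip())
            buffer = buffer[match.end():]
    if buffer.strip():
        synthesize(buffer.strip())

respond("Greet the visitor.")
```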
