
Welcome


Welcome to the Virtual Reality & Immersive Visualization Group at RWTH Aachen University!

The Virtual Reality and Immersive Visualization Group started in 1998 as a service team in the RWTH IT Center. Since 2015, we have been a research group (Lehr- und Forschungsgebiet) at i12 within the Computer Science Department. Moreover, the group is a member of the Visual Computing Institute and continues to be an integral part of the RWTH IT Center.

In a unique combination of research, teaching, services, and infrastructure, we provide Virtual Reality technologies and the underlying methodology as a powerful tool for scientific-technological applications.

In terms of basic research, we develop advanced methods and algorithms for multimodal 3D user interfaces and explorative analyses in virtual environments. Furthermore, we focus on application-driven, interdisciplinary research in collaboration with RWTH Aachen institutes, Forschungszentrum Jülich, research institutions worldwide, and partners from business and industry, covering fields like simulation science, production technology, neuroscience, and medicine.


Our offices are located in the RWTH IT Center, where we operate one of the largest Virtual Reality labs worldwide. The aixCAVE, a 30 m² visualization chamber that makes it possible to interactively explore virtual worlds, is open for use by any RWTH Aachen research group.

News

Martin Bellgardt receives doctoral degree from RWTH Aachen University

Today, our former colleague Martin Bellgardt successfully passed his Ph.D. defense and received a doctoral degree from RWTH Aachen University for his thesis on "Increasing Immersion in Machine Learning Pipelines for Mechanical Engineering". Congratulations!

April 30, 2025

Active Participation at 2024 IEEE VIS Conference (VIS 2024)

At this year's IEEE VIS Conference, our visualization group presented several contributions. Dr. Tim Gerrits chaired the 2024 SciVis Contest and presented two accepted papers: the short paper "DaVE - A Curated Database of Visualization Examples" by Jens Koenen, Marvin Petersen, Christoph Garth, and Dr. Tim Gerrits, as well as the workshop contribution "Exploring Uncertainty Visualization for Degenerate Tensors in 3D Symmetric Second-Order Tensor Field Ensembles" by Tadea Schmitz and Dr. Tim Gerrits, which received the Best Paper Award at the Workshop on Uncertainty Visualization. Congratulations!

Oct. 22, 2024

Honorable Mention

A Best Paper Honorable Mention Award at VRST 2024 went to Sevinc Eroglu for her paper entitled “Choose Your Reference Frame Right: An Immersive Authoring Technique for Creating Reactive Behavior”.

Oct. 11, 2024

Tim Gerrits as Invited Keynote Speaker at the ParaView User Days in Lyon

ParaView, developed by Kitware, is one of the most widely used open-source visualization and analysis tools in research and industry. For the second edition of the ParaView User Days, Dr. Tim Gerrits was invited to share his insights on developing and providing visualization within the academic communities.

Sept. 26, 2024

Invited Talk at Visual Computing for Biology and Medicine

This year's Eurographics Symposium on Visual Computing for Biology and Medicine (VCBM) in Magdeburg included a meeting of the VCBM Fachgruppe (special interest group), featuring an invited presentation by Dr. Tim Gerrits on "Harnessing High Performance Infrastructure for Scientific Visualization of Medical Data".

Sept. 20, 2024

24th ACM International Conference on Intelligent Virtual Agents (IVA'24)

Together with Willem-Paul Brinkman from TU Delft, our colleague Dr. Andrea Bönsch presented her work on "German and Dutch Translations of the Artificial-Social-Agent Questionnaire Instrument for Evaluating Human-Agent Interactions" at IVA 2024.

Sept. 16, 2024

Recent Publications

Audiovisual angle and voice incongruence do not affect audiovisual verbal short-term memory in virtual reality

PLOS ONE

Virtual reality (VR) environments are frequently used in auditory and cognitive research to imitate real-life scenarios. The visual component in VR has the potential to affect how auditory information is processed, especially if incongruences between the visual and auditory information occur. This study investigated how audiovisual incongruence in VR implemented with a head-mounted display (HMD) affects verbal short-term memory compared to presentation of the same material over traditional computer monitors. Two experiments were conducted with both these display devices and two types of audiovisual incongruences: angle (Exp 1) and voice (Exp 2) incongruence. To quantify short-term memory, an audiovisual verbal serial recall (avVSR) task was developed where an embodied conversational agent (ECA) was animated to speak a digit sequence, which participants had to remember. The results showed no effect of the display devices on the proportion of correctly recalled digits overall, although subjective evaluations showed a higher sense of presence in the HMD condition. For the extreme conditions of angle incongruence in the computer monitor presentation, the proportion of correctly recalled digits increased marginally, presumably due to raised attention, but the effect size was negligible. Response times were not affected by incongruences in either display device across both experiments. These findings suggest that at least for the conditions studied here, the avVSR task is robust against angle and voice audiovisual incongruences in both HMD and computer monitor displays.
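To make the outcome measure concrete: a common convention for verbal serial recall is strict positional scoring, where a digit counts as correct only if it is recalled at the position in which it was presented. The snippet below is an illustrative Python sketch of that convention; the abstract does not specify the study's exact scoring rule, so this is an assumption, not the authors' implementation.

```python
def serial_recall_score(presented, recalled):
    """Proportion of digits recalled at their correct serial position.

    Strict positional scoring: a response only counts if it matches the
    digit shown at the same position. This convention is assumed for
    illustration; the abstract does not state the study's exact rule.
    """
    hits = sum(p == r for p, r in zip(presented, recalled))
    return hits / len(presented)

# Example trial: a nine-digit sequence spoken by the ECA, and a response
# in which two adjacent digits were transposed.
presented = [3, 1, 4, 1, 5, 9, 2, 6, 5]
recalled  = [3, 1, 4, 5, 1, 9, 2, 6, 5]
print(serial_recall_score(presented, recalled))  # 7/9 ≈ 0.78
```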

Poster: Listening Effort In Populated Audiovisual Scenes Under Plausible Room Acoustic Conditions

International Symposium on Auditory and Audiological Research (ISAAR) 2025

Listening effort in real-world environments is shaped by a complex interplay of factors, including time-varying background noise, visual and acoustic cues from both interlocutors and distractors, and the acoustic properties of the surrounding space. However, many studies investigating listening effort neglect both auditory and visual fidelity: static background noise is frequently used to avoid variability, talker visualization often disregards acoustic complexity, and experiments are commonly conducted in free-field environments without spatialized sound or realistic room acoustics. These limitations risk undermining the ecological validity of study outcomes. To address this, we developed an audiovisual virtual reality (VR) framework capable of rendering immersive, realistic scenes that integrate dynamic auditory and visual cues. Background noise included time-varying speech and non-speech sounds (e.g., conversations, appliances, traffic), spatialized in controlled acoustic environments. Participants were immersed in a visually rich VR setting populated with animated virtual agents. Listening effort was assessed using a heard-text-recall paradigm embedded in a dual-task design: participants listened to and remembered short stories told by two embodied conversational agents while simultaneously performing a vibrotactile secondary task. We compared three room acoustic conditions: a free-field environment, a room optimized for reverberation time, and an untreated reverberant room. Preliminary results from 30 participants (15 female; age range: 18–33; M = 25.1, SD = 3.05) indicated that room acoustics significantly affected both listening effort and short-term memory performance, with notable differences between free-field and reverberant conditions. These findings underscore the importance of realistic acoustic environments when investigating listening behavior in immersive audiovisual settings.

Interactive Streaming of 3D Scenes to Mobile Devices using Dual-Layer Image Warping and Loop-based Depth Reconstruction

2025 International Conferences in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG 2025)

While mobile devices have developed into hardware with advanced capabilities for rendering 3D graphics, they commonly lack the computational power to render large 3D scenes with complex lighting interactively. A prominent approach to tackle this is rendering required views on a remote server and streaming them to the mobile client. However, the rate at which servers can supply data is limited, e.g., by the available network speed, requiring image-based rendering techniques like image warping to compensate for the latency and allow a smooth user experience, especially in scenes where rapid user movement is essential. In this paper, we present a novel streaming approach designed to minimize artifacts during the warping process by including an additional visibility layer that keeps track of occluded surfaces while allowing access to 360° views. In addition, we propose a novel mesh generation technique based on the detection of loops to reliably create a mesh that encodes the depth information required for the image warping process. We demonstrate our approach in a number of complex scenes and compare it against existing works using two layers and one layer alone. The results indicate a significant reduction in computation time while achieving comparable or even better visual results when using our dual-layer approach.
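The abstract relies on depth-based image warping on the client to hide streaming latency. As a rough illustration of that underlying idea, here is a minimal single-layer forward warp in Python/NumPy with z-buffered point splatting. It is a sketch under assumed pinhole-camera conventions, not the authors' method: the paper's dual-layer approach additionally keeps a visibility layer for occluded surfaces and builds a loop-based warping mesh, both of which are omitted here.

```python
import numpy as np

def forward_warp(color, depth, K, src_pose, dst_pose):
    """Warp a rendered source view into a nearby target view via its depth map.

    color:    (H, W, 3) source image
    depth:    (H, W) per-pixel depth along the camera z-axis
    K:        (3, 3) pinhole intrinsics, assumed shared by both views
    src_pose: (4, 4) source camera-to-world transform
    dst_pose: (4, 4) target camera-to-world transform
    """
    H, W = depth.shape
    # Homogeneous pixel grid of the source view, shape (3, H*W).
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1)

    # Unproject to source camera space, then transform to world space.
    cam = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)
    world = src_pose @ np.vstack([cam, np.ones((1, cam.shape[1]))])

    # Project into the target view.
    tgt = np.linalg.inv(dst_pose) @ world
    proj = K @ tgt[:3]
    z = proj[2]
    in_front = z > 1e-6
    uv = np.zeros((2, z.size), dtype=int)
    uv[:, in_front] = np.round(proj[:2, in_front] / z[in_front]).astype(int)

    # Z-buffered splat: nearer surfaces overwrite farther ones, so
    # disocclusions simply remain as holes in this single-layer sketch.
    out = np.zeros_like(color)
    zbuf = np.full((H, W), np.inf)
    x, y = uv
    ok = in_front & (x >= 0) & (x < W) & (y >= 0) & (y < H)
    src = color.reshape(-1, color.shape[-1])
    for i in np.flatnonzero(ok):
        if z[i] < zbuf[y[i], x[i]]:
            zbuf[y[i], x[i]] = z[i]
            out[y[i], x[i]] = src[i]
    return out
```

The holes left by disocclusions are exactly what the paper's additional visibility layer is meant to fill, and its loop-based mesh replaces this naive per-pixel splatting with a connected surface.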
