Welcome to the Virtual Reality & Immersive Visualization Group
at RWTH Aachen University!

The Virtual Reality and Immersive Visualization Group started in 1998 as a service team at the RWTH IT Center. Since 2015, we have been a research group (Lehr- und Forschungsgebiet) at i12 within the Computer Science Department. Moreover, the group is a member of the Visual Computing Institute and continues to be an integral part of the RWTH IT Center.

In a unique combination of research, teaching, services, and infrastructure, we provide Virtual Reality technologies and the underlying methodology as a powerful tool for scientific-technological applications.

In terms of basic research, we develop advanced methods and algorithms for multimodal 3D user interfaces and explorative analyses in virtual environments. Furthermore, we focus on application-driven, interdisciplinary research in collaboration with RWTH Aachen institutes, Forschungszentrum Jülich, research institutions worldwide, and partners from business and industry, covering fields like simulation science, production technology, neuroscience, and medicine.

To this end, we are members of / associated with the following institutes and facilities:

Our offices are located in the RWTH IT Center, where we operate one of the largest Virtual Reality labs worldwide. The aixCAVE, a 30 m² visualization chamber that makes it possible to interactively explore virtual worlds, is open for use by any RWTH Aachen research group.


21st ACM International Conference on Intelligent Virtual Agents (IVA21)

Andrea Bönsch presented a paper at the 21st ACM International Conference on Intelligent Virtual Agents. Additionally, her student David Hashem submitted a GALA video showcasing the respective application: a virtual museum curator who either guides the user or accompanies the user during free exploration. The video won the ACM IVA 2021 GALA Audience Award. Congratulations!

Sept. 17, 2021

ACM Symposium on Applied Perception (SAP2021)

Jonathan Ehret presented joint work with the RWTH Institute for Hearing Technology and Acoustics and the Cologne IfL Phonetik on the Influence of Prosody and Embodiment on the Perceived Naturalness of a Conversational Agent's Speech. During the peer-review process, the paper was invited to and accepted by the journal ACM Transactions on Applied Perception (TAP). Congratulations!

Sept. 16, 2021


Andrea Bönsch presented a poster on Indirect User Guidance by Pedestrians in Virtual Environments during the International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments (ICAT-EGVE 2021).

Sept. 9, 2021

Kármán Conference: European Meeting on Intermediate Filaments

Prof. Reinhard Windoffer of the Institute of Molecular and Cellular Anatomy (MOCA) presented his group's research on keratin intermediate filaments. Our group supported this work with an immersive visualization of the keratin cytoskeletons.

Sept. 8, 2021

Med&BioVis Workshop 2021

Marcel Krüger gave a talk introducing "Insite - A Pipeline for the Interactive Analysis of Neuronal Network Simulations via NEST, TVB, and ARBOR" at the Med&BioVis Workshop of the GI Fachgruppe Visual Computing in Biology and Medicine.

Sept. 3, 2021

DAGA 2021

Our group was involved in three presentations at this year's DAGA, the 47th Annual Conference on Acoustics. While Jonathan Ehret talked about "Speech Source Directivity for Embodied Conversational Agents", our colleagues from the RWTH Institute for Hearing Technology and Acoustics presented joint work on "Prosodic and Visual Naturalness of Dialogs Presented by Conversational Virtual Agents" and our AUDICTIVE project on listening to, and remembering, conversations between two talkers.

Aug. 18, 2021

Recent Publications

Quantitative Mapping of Keratin Networks in 3D


Mechanobiology requires precise quantitative information on processes taking place in specific 3D microenvironments. Connecting the abundance of microscopical, molecular, biochemical, and cell mechanical data with defined topologies has turned out to be extremely difficult. Establishing such structural and functional 3D maps needed for biophysical modeling is a particular challenge for the cytoskeleton, which consists of long and interwoven filamentous polymers coordinating subcellular processes and interactions of cells with their environment. To date, useful tools are available for the segmentation and modeling of actin filaments and microtubules, but comprehensive tools for the mapping of intermediate filament organization are still lacking. In this work, we describe a workflow to model and examine the complete 3D arrangement of the keratin intermediate filament cytoskeleton in canine, murine, and human epithelial cells both in vitro and in vivo. Numerical models are derived from confocal Airyscan high-resolution 3D imaging of fluorescence-tagged keratin filaments. They are interrogated and annotated at different length scales using different modes of visualization including immersive virtual reality. In this way, information is provided on network organization at the subcellular level including mesh arrangement, density, and isotropic configuration as well as details on filament morphology such as bundling, curvature, and orientation. We show that the comparison of these parameters helps to identify, in quantitative terms, similarities and differences of keratin network organization in epithelial cell types defining subcellular domains, notably basal, apical, lateral, and perinuclear systems. The described approach and the presented data are pivotal for generating mechanobiological models that can be experimentally tested.

Augmented Reality-Based Surgery on the Human Cadaver Using a New Generation of Optical Head-Mounted Displays: Development and Feasibility Study

JMIR Serious Games 2022

**Background:** Although nearly one-third of the world’s disease burden requires surgical care, only a small proportion of digital health applications are directly used in the surgical field. In the coming decades, the application of augmented reality (AR) with a new generation of optical-see-through head-mounted displays (OST-HMDs) like the HoloLens (Microsoft Corp) has the potential to bring digital health into the surgical field. However, for the application to be performed on a living person, proof of performance must first be provided due to regulatory requirements. In this regard, cadaver studies could provide initial evidence. **Objective:** The goal of the research was to develop an open-source system for AR-based surgery on human cadavers using freely available technologies. **Methods:** We tested our system using an easy-to-understand scenario in which fractured zygomatic arches of the face had to be repositioned with visual and auditory feedback to the investigators using a HoloLens. Results were verified with postoperative imaging and assessed in a blinded fashion by 2 investigators. The developed system and scenario were qualitatively evaluated by consensus interview and individual questionnaires. **Results:** The development and implementation of our system was feasible and could be realized in the course of a cadaver study. The AR system was found helpful by the investigators for spatial perception in addition to the combination of visual as well as auditory feedback. The surgical end point could be determined metrically as well as by assessment. **Conclusions:** The development and application of an AR-based surgical system using freely available technologies to perform OST-HMD–guided surgical procedures in cadavers is feasible. Cadaver studies are suitable for OST-HMD–guided interventions to measure a surgical end point and provide an initial data foundation for future clinical trials. 
The availability of free systems for researchers could be helpful for a possible translation process from digital health to AR-based surgery using OST-HMDs in the operating theater via cadaver studies.

The aixCAVE at RWTH Aachen University

In Chapter 09 "VR/AR Use Cases" of "Virtual and Augmented Reality - Foundations and Methods of Extended Realities"

At a large technical university like RWTH Aachen, there is enormous potential to use VR as a tool in research. In contrast to applications from the entertainment sector, many scientific application scenarios - for example, the 3D analysis of result data from simulated flows - depend not only on a high degree of immersion but also on a high resolution and excellent image quality of the display. In addition, the visual analysis of scientific data is often carried out and discussed in smaller teams. For these reasons, but also for simple ergonomic aspects (comfort, cybersickness), many technical and scientific VR applications cannot be implemented on the basis of head-mounted displays alone. It therefore remains desirable for the VR labs of universities and research institutions to install immersive large-screen rear-projection systems (CAVEs) in order to adequately support scientists. Due to the high investment costs, such systems are found at larger universities such as Aachen, Cologne, Munich, or Stuttgart, often operated by the computing centers as a central infrastructure accessible to all scientists at the university.
