Welcome to the Virtual Reality and Immersive Visualization Group at RWTH Aachen University!
The Virtual Reality and Immersive Visualization Group started in 1998 as a service team in the RWTH IT Center. Since 2015, we have been a research group (Lehr- und Forschungsgebiet) at i12 within the Computer Science Department. Moreover, the group is a member of the Visual Computing Institute and continues to be an integral part of the RWTH IT Center.
In a unique combination of research, teaching, services, and infrastructure, we provide Virtual Reality technologies and the underlying methodology as a powerful tool for scientific-technological applications.
In terms of basic research, we develop advanced methods and algorithms for multimodal 3D user interfaces and explorative analyses in virtual environments. Furthermore, we focus on application-driven, interdisciplinary research in collaboration with RWTH Aachen institutes, Forschungszentrum Jülich, research institutions worldwide, and partners from business and industry, covering fields like simulation science, production technology, neuroscience, and medicine.
To this end, we are members of, or associated with, several institutes and facilities.
Our offices are located in the RWTH IT Center, where we operate one of the largest Virtual Reality labs worldwide. The aixCAVE, a 30 m² visualization chamber that makes it possible to interactively explore virtual worlds, is open for use by any RWTH Aachen research group.
News
• Sept. 19, 2023: 23rd ACM International Conference on Intelligent Virtual Agents (IVA '23). Jonathan Ehret presented his paper "Who's next? Integrating Non-Verbal Turn-Taking Cues for Embodied Conversational Agents" at the conference. Furthermore, Andrea Bönsch presented two posters on virtual agents supporting scene exploration, either as conversing groups or as a method for constrained navigation.
• June 30, 2023: The SPP AUDICTIVE conference took place, and we contributed to the program with two project presentations.
• May 22, 2023: Christian Nowke receives doctoral degree from the University of Trier. Our colleague Christian Nowke successfully passed his Ph.D. defense and received a doctoral degree from the University of Trier for his thesis "Semantic-Aware Coordinated Multiple Views for the Interactive Analysis of Neural Activity Data". Congratulations!
• Dec. 16, 2022: Cover of the German GI Informatik Spektrum. The cover of the current issue of Informatik Spektrum, the journal of the Gesellschaft für Informatik e.V. (GI), presents results of a joint project between the E.ON Energy Research Center and our group. The use of air filters in classrooms to fight the ongoing COVID-19 pandemic has been, and continues to be, a much-discussed topic. The cover shows a visualization in our aixCAVE that enables an analysis of the temporal and spatial dynamics of the aerosol concentration for each person in the respective room. Virtual reality is proving to be an effective tool for scientists here: it demonstrates the potential risk of aerosol dispersion in enclosed spaces with many people, which can be intuitively experienced even by laypersons. Additional information on this project is provided in the IT Center Annual Report 2020/2021, page 58f (German only).
• Oct. 11, 2022: BugWright2: Successful On-Site Field Tests. In a nutshell, our EU project BugWright2 deals with the development of semi-autonomous robots that can inspect and continuously monitor the hulls of container ships for corrosion. In September, Simon Oehrl and Sebastian Pape travelled with colleagues from the University of Trier to Metz, France, to test their current implementations on-site. A short travel report is now available online.
• Oct. 5, 2022: Immersive Art: Our Cooperation with Jana Rusch in the Press. Some time ago, the contemporary Belgian painter Jana Rusch approached us to explore our mutual interest in a cooperation in the area of immersive art. Our colleagues Sevinc Eroglu and Patric Schmitz immediately came up with many ideas. They teamed up with Jana and created Rilievo, a virtual authoring environment for artistic creation in VR, which enables Jana to convert her 2D drawings effortlessly into 3D volumetric representations, while relief sculpting allows for volume manipulation. This successful cooperation and the resulting framework have now been presented in the press. Click here for the online article (in German only).
Recent Publications
Who's next? Integrating Non-Verbal Turn-Taking Cues for Embodied Conversational Agents
To be presented at: ACM International Conference on Intelligent Virtual Agents (IVA '23)
Taking turns in a conversation is a delicate interplay of various signals, which we as humans can easily decipher. Embodied conversational agents (ECAs) communicating with humans should leverage this ability for smooth and enjoyable conversations. Extensive research has analyzed human turn-taking cues, and attempts have been made to predict turn-taking based on observed cues. These cues range from prosodic, semantic, and syntactic modulation through adapted gesture and gaze behavior to actively used respiration. However, when generating such behavior for social robots or ECAs, often only single modalities were considered, e.g., gaze. We strive to design a comprehensive system that produces cues for all non-verbal modalities: gestures, gaze, and breathing. The system provides valuable cues without requiring speech content adaptation. We evaluated our system in a VR-based user study with N = 32 participants executing two subsequent tasks. First, we asked them to listen to two ECAs taking turns in several conversations. Second, participants engaged in taking turns with one of the ECAs directly. We examined the system's usability and the perceived social presence of the ECAs' turn-taking behavior, both with respect to each individual non-verbal modality and their interplay. While we found effects of gesture manipulation in interactions with the ECAs, no effects on social presence were found.
Effect of Head-Mounted Displays on Students' Acquisition of Surgical Suturing Techniques Compared to an E-Learning and Tutor-Led Course: A Randomized Controlled Trial
International Journal of Surgery
Background: Although surgical suturing is one of the most important basic skills, many medical school graduates do not acquire sufficient knowledge of it due to its lack of integration into the curriculum or a shortage of tutors. E-learning approaches attempt to address this issue but still rely on the involvement of tutors. Furthermore, the learning experience and visual-spatial ability appear to play a critical role in surgical skill acquisition. Virtual reality head-mounted displays (HMDs) could address this, but the benefits of immersive and stereoscopic learning of surgical suturing techniques are still unclear.
Material and Methods: In this multi-arm randomized controlled trial, 150 novices participated. Three teaching modalities were compared: an e-learning course (monoscopic) and an HMD-based course (stereoscopic, immersive), both self-directed, and a tutor-led course with feedback. Suturing performance was recorded by video camera both before and after course participation (>26 hours of video material) and assessed in a blinded fashion using the OSATS Global Rating Score (GRS). Furthermore, the optical flow of the videos was determined algorithmically. The number of sutures performed was counted, visual-spatial ability was measured with the mental rotation test (MRT), and the courses were assessed with questionnaires.
Results: Students' self-assessment in the HMD-based course was comparable to that in the tutor-led course and significantly better than in the e-learning course (P = 0.003). Course suitability was rated best for the tutor-led course (x̄ = 4.8), followed by the HMD-based (x̄ = 3.6) and e-learning (x̄ = 2.5) courses. The median GRS was comparable between courses (P = 0.15): 12.4 (95% CI 10.0–12.7) for the e-learning course, 14.1 (95% CI 13.0–15.0) for the HMD-based course, and 12.7 (95% CI 10.3–14.2) for the tutor-led course. However, the GRS was significantly correlated with the number of sutures performed during the training session (P = 0.002), but not with visual-spatial ability (P = 0.626). Optical flow (R² = 0.15, P < 0.001) and the number of sutures performed (R² = 0.73, P < 0.001) can serve as measures complementary to the GRS.
Conclusion: The use of HMDs with stereoscopic and immersive video provides advantages in the learning experience and should be preferred over a traditional web application for e-learning. Contrary to expectations, feedback is not necessary for novices to achieve a sufficient level in suturing; only the number of surgical sutures performed during training is a good determinant of competence improvement. Nevertheless, feedback still enhances the learning experience. Therefore, automated assessment as an alternative feedback approach could further improve self-directed learning modalities. As a next step, the data from this study could be used to develop such automated AI-based assessments.
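The abstract above mentions that the optical flow of the training videos was determined algorithmically as a motion measure complementary to the GRS, without naming the algorithm used. Purely as a minimal, hypothetical sketch of such a measure, the following assumes dense Farneback optical flow via OpenCV and reduces each video to its mean flow magnitude; the function name, frame-sampling step, and flow parameters are illustrative assumptions, not the study's actual method.

```python
# Hypothetical sketch: mean optical-flow magnitude of a suturing video as a
# simple motion proxy. The paper does not specify its algorithm; dense
# Farneback flow (OpenCV) is assumed here purely for illustration.
import cv2
import numpy as np

def mean_flow_magnitude(video_path: str, step: int = 5) -> float:
    """Average optical-flow magnitude over a video, sampling every `step` frames."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        raise IOError(f"Cannot read {video_path}")
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    magnitudes = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        idx += 1
        if idx % step:
            continue  # sample sparsely; flow is computed between frames `step` apart
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense Farneback flow: one (dx, dy) displacement vector per pixel.
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Mean per-pixel flow magnitude for this frame pair.
        magnitudes.append(np.linalg.norm(flow, axis=2).mean())
        prev = gray
    cap.release()
    return float(np.mean(magnitudes)) if magnitudes else 0.0
```

A per-video score of this kind could then be regressed against blinded GRS ratings, in the spirit of the R² values reported in the abstract.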
Towards Plausible Cognitive Research in Virtual Environments: The Effect of Audiovisual Cues on Short-Term Memory in Two Talker Conversations
AUDICTIVE Conference 2023
When three or more people are involved in a conversation, one conversational partner often listens to what the others are saying and has to remember the conversational content. The setups of cognitive-psychological experiments often differ substantially from such everyday listening situations by neglecting audiovisual cues. The presence of speech-related audiovisual cues, such as the spatial position, appearance, or non-verbal behavior of the conversing talkers, may influence the listener's memory and comprehension of conversational content. In our project, we provide first insights into the contribution of acoustic and visual cues to short-term memory and (social) presence. Analyses have shown that memory performance varies with increasingly plausible audiovisual characteristics. Furthermore, we have conducted a series of experiments on the influence of the visual reproduction medium (virtual reality vs. traditional computer screens) and of spatial or content audiovisual mismatch on auditory short-term memory performance. Adding virtual embodiments to the talkers allowed us to conduct experiments on the influence of the fidelity of co-verbal gestures and turn-taking signals. Thus, we are able to provide a more plausible paradigm for investigating memory for two-talker conversations within an interactive audiovisual virtual reality environment.