Profile
Prof. Dr. Torsten Wolfgang Kuhlen
Publications
Wayfinding in Immersive Virtual Environments as Social Activity Supported by Virtual Agents
Effective navigation and interaction within immersive virtual environments rely on thorough scene exploration. Therefore, wayfinding is essential, assisting users in comprehending their surroundings, planning routes, and making informed decisions. As real-life observations show, wayfinding is not only a cognitive process but also a social activity profoundly influenced by the presence and behaviors of others. In virtual environments, these 'others' are virtual agents (VAs), defined as anthropomorphic computer-controlled characters, who enliven the environment and can serve as background characters or direct interaction partners. However, little research has explored how to use VAs efficiently as social wayfinding support. In this paper, we aim to assess and contrast user experience, user comfort, and the acquisition of scene knowledge through a between-subjects study involving n = 60 participants across three distinct wayfinding conditions in one slightly populated urban environment: (i) unsupported wayfinding, (ii) strong social wayfinding using a virtual supporter who incorporates guiding and accompanying elements while directly impacting the participants' wayfinding decisions, and (iii) weak social wayfinding using flows of VAs that subtly influence the participants' wayfinding decisions by their locomotion behavior. Our work is the first to compare the impact of VAs' behavior in virtual reality on users' scene exploration, including spatial awareness, scene comprehension, and comfort. The results show the general utility of social wayfinding support, while underscoring the superiority of the strong type. Nevertheless, further exploration of weak social wayfinding as a promising technique is needed. Thus, our work contributes to the enhancement of VAs as advanced user interfaces, increasing user acceptance and usability.
@article{Boensch2024,
title={Wayfinding in Immersive Virtual Environments as Social Activity Supported by Virtual Agents},
author={B{\"o}nsch, Andrea and Ehret, Jonathan and Rupp, Daniel and Kuhlen, Torsten W.},
journal={Frontiers in Virtual Reality},
volume={4},
year={2024},
pages={1334795},
publisher={Frontiers},
doi={10.3389/frvir.2023.1334795}
}
Virtual Reality as a Tool for Monitoring Additive Manufacturing Processes via Digital Shadows
We present a data acquisition and visualization pipeline that allows experts to monitor additive manufacturing processes, in particular laser metal deposition with wire (LMD-w) processes, in immersive virtual reality. Our virtual environment consists of a digital shadow of the LMD-w production site enriched with additional measurement data shown on both static and handheld virtual displays. Users can explore the production site through enhanced teleportation capabilities that enable them to change their scale as well as their elevation above the ground plane. In an exploratory user study with 22 participants, we demonstrate that our system is generally suitable for the supervision of LMD-w processes while generating low task load and cybersickness. Therefore, it serves as a first promising step towards the successful application of virtual reality technology in the comparatively young field of additive manufacturing.
Semi-Automated Guided Teleportation through Immersive Virtual Environments
Immersive knowledge spaces like museums or cultural sites are often explored by traversing pre-defined paths that are curated to unfold a specific educational narrative. To support this type of guided exploration in VR, we present a semi-automated, handsfree path traversal technique based on teleportation that features a slow-paced interaction workflow targeted at fostering knowledge acquisition and maintaining spatial awareness. In an empirical user study with 34 participants, we evaluated two variations of our technique, differing in the presence or absence of intermediate teleportation points between the main points of interest along the route. While visiting additional intermediate points was objectively less efficient, our results indicate significant benefits of this approach regarding the user’s spatial awareness and perception of interface dependability. However, the user’s perception of flow, presence, attractiveness, perspicuity, and stimulation did not differ significantly. The overall positive reception of our approach encourages further research into semi-automated locomotion based on teleportation and provides initial insights into the design space of successful techniques in this domain.
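To illustrate the intermediate-point variation, the following minimal Python sketch subdivides the straight-line segments between the main points of interest so that no single teleport jump exceeds a maximum spacing. The even subdivision and the max_spacing parameter are illustrative assumptions, not the paper's exact placement rule:

import math

def intermediate_points(pois, max_spacing=5.0):
    # insert evenly spaced waypoints so no teleport jump exceeds max_spacing
    path = []
    for a, b in zip(pois, pois[1:]):
        path.append(a)
        extra = max(0, math.ceil(math.dist(a, b) / max_spacing) - 1)
        for i in range(1, extra + 1):
            t = i / (extra + 1)  # interpolation parameter along the segment
            path.append(tuple(p + t * (q - p) for p, q in zip(a, b)))
    path.append(pois[-1])
    return path

route = intermediate_points([(0, 0, 0), (12, 0, 0), (12, 0, 9)], max_spacing=4.0)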
A Lecturer’s Voice Quality and its Effect on Memory, Listening Effort, and Perception in a VR Environment
Many lecturers develop voice problems, such as hoarseness. Nevertheless, research on how voice quality influences listeners’ perception, comprehension, and retention of spoken language is limited to a small number of audio-only experiments. We aimed to address this gap by using audio-visual virtual reality (VR) to investigate the impact of a lecturer’s hoarseness on university students’ heard text recall, listening effort, and listening impression. Fifty participants were immersed in a virtual seminar room, where they engaged in a Dual-Task Paradigm. They listened to narratives presented by a virtual female professor, who spoke in either a typical or hoarse voice. Simultaneously, participants performed a secondary task. Results revealed significantly prolonged secondary-task response times with the hoarse voice compared to the typical voice, indicating increased listening effort. Subjectively, participants rated the hoarse voice as more annoying, effortful to listen to, and impeding for their cognitive performance. No effect of voice quality was found on heard text recall, suggesting that, while hoarseness may compromise certain aspects of spoken language processing, this might not necessarily result in reduced information retention. In summary, our findings underscore the importance of promoting vocal health among lecturers, which may contribute to enhanced listening conditions in learning spaces.
@article{Schiller2024,
author = {Isabel S. Schiller and Carolin Breuer and Lukas Aspöck and
Jonathan Ehret and Andrea Bönsch and Torsten W. Kuhlen and Janina Fels and
Sabine J. Schlittmeier},
doi = {10.1038/s41598-024-63097-6},
issn = {2045-2322},
issue = {1},
journal = {Scientific Reports},
keywords = {Audio-visual language processing,Virtual reality,Voice
quality},
month = {5},
pages = {12407},
pmid = {38811832},
title = {A lecturer’s voice quality and its effect on memory, listening
effort, and perception in a VR environment},
volume = {14},
url = {https://www.nature.com/articles/s41598-024-63097-6},
year = {2024},
}
IntenSelect+: Enhancing Score-Based Selection in Virtual Reality
Object selection in virtual environments is one of the most common and recurring interaction tasks. Therefore, the used technique can critically influence a system’s overall efficiency and usability. IntenSelect is a scoring-based selection-by-volume technique that was shown to offer improved selection performance over conventional raycasting in virtual reality. The benefits of this initial method, however, are most pronounced for small spherical objects that converge to a point-like appearance; moreover, the method is challenging to parameterize and has inherent limitations in terms of flexibility. We present an enhanced version of IntenSelect called IntenSelect+ designed to overcome multiple shortcomings of the original IntenSelect approach. In an empirical within-subjects user study with 42 participants, we compared IntenSelect+ to IntenSelect and conventional raycasting on various complex object configurations motivated by prior work. In addition to replicating the previously shown benefits of IntenSelect over raycasting, our results demonstrate significant advantages of IntenSelect+ over IntenSelect regarding selection performance, task load, and user experience. We, therefore, conclude that IntenSelect+ is a promising enhancement of the original approach that enables faster, more precise, and more comfortable object selection in immersive virtual environments.
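For readers unfamiliar with score-based selection, the following Python sketch shows the core idea behind IntenSelect-style techniques: each candidate object receives a per-frame contribution based on its angular offset from the selection ray, and contributions are accumulated over time so that scores build up and decay smoothly. The linear falloff, cone angle, and stickiness values are illustrative assumptions, not the parameterization of IntenSelect or IntenSelect+:

import math

def frame_contribution(ray_origin, ray_dir, obj_pos, cone_angle_deg=15.0):
    # angular offset of the object from the selection ray (ray_dir is unit length)
    v = [o - r for o, r in zip(obj_pos, ray_origin)]
    norm = math.sqrt(sum(c * c for c in v))
    cos_a = sum(d * c for d, c in zip(ray_dir, v)) / norm
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
    # 1.0 at the ray center, fading to 0.0 at the cone boundary
    return max(0.0, 1.0 - angle / cone_angle_deg)

def update_score(prev_score, contribution, stickiness=0.9):
    # temporal accumulation stabilizes selection against hand jitter
    return stickiness * prev_score + (1.0 - stickiness) * contribution

# per frame: update the score of every candidate; the highest-scoring
# object above a threshold becomes the current selection target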
@ARTICLE{10459000,
author={Krüger, Marcel and Gerrits, Tim and Römer, Timon and Kuhlen, Torsten and Weissker, Tim},
journal={IEEE Transactions on Visualization and Computer Graphics},
title={IntenSelect+: Enhancing Score-Based Selection in Virtual Reality},
year={2024},
volume={},
number={},
pages={1-10},
keywords={Visualization;Three-dimensional displays;Task analysis;Usability;Virtual environments;Shape;Engines;Virtual Reality;3D User Interfaces;3D Interaction;Selection;Score-Based Selection;Temporal Selection;IntenSelect},
}
Authentication in Immersive Virtual Environments through Gesture-Based Interaction with a Virtual Agent
Authentication poses a significant challenge in VR applications, as conventional methods, such as text input for usernames and passwords, prove cumbersome and unnatural in immersive virtual environments. Alternatives such as password managers or two-factor authentication may necessitate users to disengage from the virtual experience by removing their headsets. Consequently, we present an innovative system that utilizes virtual agents (VAs) as interaction partners, enabling users to authenticate naturally through a set of ten gestures, such as high fives, fist bumps, or waving. By combining these gestures, users can create personalized authentications akin to PINs, potentially enhancing security without compromising the immersive experience. To gain first insights into the suitability of this authentication process, we conducted a formal expert review with five participants and compared our system to a virtual keypad authentication approach. While our results show that the effectiveness of a VA-mediated gesture-based authentication system is still limited, they motivate further research in this area.
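The gesture-PIN idea can be made concrete with a small sketch: the enrolled gesture sequence is stored as a salted hash, and an observed sequence is checked with a constant-time comparison. The storage scheme and gesture labels are our own assumptions for illustration; the paper does not prescribe an implementation:

import hashlib, hmac, os

def gesture_digest(sequence, salt):
    # hash an ordered gesture sequence, e.g. ['high_five', 'wave', 'fist_bump']
    return hashlib.sha256(salt + '|'.join(sequence).encode()).digest()

salt = os.urandom(16)  # stored alongside the digest at enrollment
enrolled = gesture_digest(['high_five', 'wave', 'fist_bump'], salt)

def verify(observed):
    # constant-time comparison avoids leaking how many gestures matched
    return hmac.compare_digest(gesture_digest(observed, salt), enrolled)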
VRScenarioBuilder: Free-Hand Immersive Authoring Tool for Scenario-based Testing of Automated Vehicles
Virtual Reality has become an important medium in the automotive industry, providing engineers with a simulated platform to actively engage with and evaluate realistic driving scenarios for testing and validating automated vehicles. However, engineers are often restricted to using 2D desktop-based tools for designing driving scenarios, which can result in inefficiencies in the development and testing cycles. To this end, we present VRScenarioBuilder, an immersive authoring tool that enables engineers to create and modify dynamic driving scenarios directly in VR using free-hand interactions. Our tool features a natural user interface that enables users to create scenarios by using drag-and-drop building blocks. To evaluate the interface components and interactions, we conducted a user study with VR experts. Our findings highlight the effectiveness and potential improvements of our tool. We have further identified future research directions, such as exploring the spatial arrangement of the interface components and managing lengthy blocks.
Demo: Webcam-based Hand- and Object-Tracking for a Desktop Workspace in Virtual Reality
As virtual reality overlays the user’s view, challenges arise when interaction with their physical surroundings is still needed. In a seated workspace environment, interaction with the physical surroundings can be essential to enable productive working. Interaction with, e.g., a physical mouse and keyboard can be difficult when no visual reference shows where they are placed. This demo shows a combination of computer vision-based marker detection with machine-learning-based hand detection to bring users’ hands and arbitrary objects into VR.
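A rough Python sketch of the demo's two ingredients, combining fiducial-marker detection for arbitrary objects with learned hand-landmark detection; it assumes opencv-contrib-python (>= 4.7) and mediapipe, and omits the VR side that streams the poses into the engine:

import cv2
import mediapipe as mp

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(aruco_dict)
hands = mp.solutions.hands.Hands(max_num_hands=2)

cap = cv2.VideoCapture(0)
for _ in range(300):  # a real demo would loop until shutdown
    ok, frame = cap.read()
    if not ok:
        break
    # 1) fiducial markers attached to physical objects (keyboard, mouse, ...)
    corners, ids, _ = detector.detectMarkers(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    # 2) bare hands via learned landmark regression
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    # a full system would now forward ids/corners and
    # result.multi_hand_landmarks into the VR scene as proxies
cap.release()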
@inproceedings{10.1145/3677386.3688879,
author = {Pape, Sebastian and Beierle, Jonathan Heinrich and Kuhlen, Torsten Wolfgang and Weissker, Tim},
title = {Webcam-based Hand- and Object-Tracking for a Desktop Workspace in Virtual Reality},
year = {2024},
isbn = {9798400710889},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3677386.3688879},
doi = {10.1145/3677386.3688879},
abstract = {As virtual reality overlays the user’s view, challenges arise when interaction with their physical surroundings is still needed. In a seated workspace environment interaction with the physical surroundings can be essential to enable productive working. Interaction with e.g. physical mouse and keyboard can be difficult when no visual reference is given to where they are placed. This demo shows a combination of computer vision-based marker detection with machine-learning-based hand detection to bring users’ hands and arbitrary objects into VR.},
booktitle = {Proceedings of the 2024 ACM Symposium on Spatial User Interaction},
articleno = {64},
numpages = {2},
keywords = {Hand-Tracking, Object-Tracking, Physical Props, Virtual Reality, Webcam},
location = {Trier, Germany},
series = {SUI '24}
}
On the Computation of User Placements for Virtual Formation Adjustments during Group Navigation
Several group navigation techniques enable a single navigator to control travel for all group members simultaneously in social virtual reality. A key aspect of this process is the ability to rearrange the group into a new formation to facilitate the joint observation of the scene or to avoid obstacles on the way. However, the question of how users should be distributed within the new formation to create an intuitive transition that minimizes disruptions of ongoing social activities is currently not explored. In this paper, we begin to close this gap by introducing four user placement strategies based on mathematical considerations, discussing their benefits and drawbacks, and sketching further novel ideas to approach this topic from different angles in future work. Our work, therefore, contributes to the overarching goal of making group interactions in social virtual reality more intuitive and comfortable for the involved users.
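As one example from this family of strategies, the sketch below assigns users to the slots of a new formation so that the summed travel distance is minimal; the brute-force search over permutations is only viable for small groups, assumes one slot per user, and the paper's strategies may weigh other criteria:

from itertools import permutations
import math

def min_cost_placement(user_positions, slot_positions):
    best, best_cost = None, math.inf
    for perm in permutations(range(len(slot_positions))):
        cost = sum(math.dist(user_positions[u], slot_positions[s])
                   for u, s in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best  # best[u] = slot index assigned to user u

users = [(0, 0), (2, 0), (4, 0)]
slots = [(5, 5), (6, 5), (7, 5)]
print(min_cost_placement(users, slots))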
@INPROCEEDINGS{10536250,
author={Weissker, Tim and Franzgrote, Matthis and Kuhlen, Torsten and Gerrits, Tim},
booktitle={2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
title={On the Computation of User Placements for Virtual Formation Adjustments During Group Navigation},
year={2024},
volume={},
number={},
pages={396-402},
keywords={Three-dimensional displays;Navigation;Conferences;Virtual reality;Human factors;User interfaces;Task analysis;Human-centered computing—Human computer interaction (HCI)—Interaction paradigms—Virtual reality;Human-centered computing—Interaction design—Interaction design theory, concepts and paradigms},
doi={10.1109/VRW62533.2024.00077}}
Try This for Size: Multi-Scale Teleportation in Immersive Virtual Reality
The ability of a user to adjust their own scale while traveling through virtual environments enables them to inspect tiny features while being ant-sized and to gain an overview of the surroundings as a giant. While prior work has almost exclusively focused on steering-based interfaces for multi-scale travel, we present three novel teleportation-based techniques that avoid continuous motion flow to reduce the risk of cybersickness. Our approaches build on the extension of known teleportation workflows and suggest specifying scale adjustments either simultaneously with, as a connected second step after, or separately from the user’s new horizontal position. The results of a two-part user study with 30 participants indicate that the simultaneous and connected specification paradigms are both suitable candidates for effective and comfortable multi-scale teleportation with nuanced individual benefits. Scale specification as a separate mode, on the other hand, was considered less beneficial. We compare our findings to prior research and publish the executable of our user study to facilitate replication and further analyses.
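A minimal sketch of what applying a multi-scale teleport amounts to once target position and scale are specified: the tracking space is re-anchored so the camera ends up at the scaled eye height above the target. Names and the eye-height handling are illustrative assumptions, not the paper's implementation:

def teleport(target_xz, new_scale, eye_height=1.7):
    x, z = target_xz
    # scaling the user also scales their apparent eye height, so the camera
    # lands at new_scale * eye_height above the chosen floor position
    return {'position': (x, new_scale * eye_height, z), 'scale': new_scale}

print(teleport((10.0, -4.0), new_scale=20.0))  # giant overview
print(teleport((10.0, -4.0), new_scale=0.05))  # ant-sized inspection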
@ARTICLE{10458384,
author={Weissker, Tim and Franzgrote, Matthis and Kuhlen, Torsten},
journal={IEEE Transactions on Visualization and Computer Graphics},
title={Try This for Size: Multi-Scale Teleportation in Immersive Virtual Reality},
year={2024},
volume={30},
number={5},
pages={2298-2308},
keywords={Teleportation;Navigation;Virtual environments;Three-dimensional displays;Visualization;Cybersickness;Collaboration;Virtual Reality;3D User Interfaces;3D Navigation;Head-Mounted Display;Teleportation;Multi-Scale},
doi={10.1109/TVCG.2024.3372043}}
StudyFramework: Comfortably Setting up and Conducting Factorial-Design Studies Using the Unreal Engine
Setting up and conducting user studies is fundamental to virtual reality research. Yet, often these studies are developed from scratch, which is time-consuming and especially hard and error-prone for novice developers. In this paper, we introduce the StudyFramework, a framework specifically designed to streamline the setup and execution of factorial-design VR-based user studies within the Unreal Engine, significantly enhancing the overall process. We elucidate core concepts such as setup, randomization, the experimenter view, and logging. After utilizing our framework to set up and conduct their respective studies, 11 study developers provided valuable feedback through a structured questionnaire. This feedback, which was generally positive and highlighted the framework's simplicity and usability, is discussed in detail.
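To give a flavor of what such a framework automates, the sketch below enumerates the full crossing of factor levels and shuffles them per participant with a seeded RNG for reproducible orders. The factor names are invented, and the StudyFramework's actual API (Unreal Engine, C++) will differ:

import itertools, random

factors = {
    'locomotion': ['teleport', 'steering'],
    'scene': ['urban', 'indoor', 'forest'],
}

def conditions_for(participant_id):
    cells = list(itertools.product(*factors.values()))  # 2 x 3 = 6 conditions
    rng = random.Random(participant_id)  # per-participant reproducible order
    rng.shuffle(cells)
    return [dict(zip(factors, cell)) for cell in cells]

print(conditions_for(participant_id=7))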
@InProceedings{Ehret2024a,
author={Ehret, Jonathan and Bönsch, Andrea and Fels, Janina and
Schlittmeier, Sabine J. and Kuhlen, Torsten W.},
booktitle={2024 IEEE Conference on Virtual Reality and 3D User Interfaces
Abstracts and Workshops (VRW): Workshop "Open Access Tools and Libraries for
Virtual Reality"},
title={StudyFramework: Comfortably Setting up and Conducting
Factorial-Design Studies Using the Unreal Engine},
year={2024}
}
Audiovisual Coherence: Is Embodiment of Background Noise Sources a Necessity?
Exploring the synergy between visual and acoustic cues in virtual reality (VR) is crucial for elevating user engagement and perceived (social) presence. We present a study exploring the necessity and design impact of background sound source visualizations to guide the design of future soundscapes. To this end, we immersed n = 27 participants using a head-mounted display (HMD) within a virtual seminar room with six virtual peers and a virtual female professor. Participants engaged in a dual-task paradigm involving simultaneously listening to the professor and performing a secondary vibrotactile task, followed by recalling the heard speech content. We compared three types of background sound source visualizations in a within-subject design: no visualization, static visualization, and animated visualization. Participants’ subjective ratings indicate the importance of animated background sound source visualization for an optimal coherent audiovisual representation, particularly when embedding peer-emitted sounds. However, despite this subjective preference, audiovisual coherence did not affect participants’ performance in the dual-task paradigm measuring their listening effort.
@InProceedings{Ehret2024b,
author={Ehret, Jonathan and Bönsch, Andrea and Schiller, Isabel S. and
Breuer, Carolin and Aspöck, Lukas and Fels, Janina and Schlittmeier, Sabine
J. and Kuhlen, Torsten W.},
booktitle={2024 IEEE Conference on Virtual Reality and 3D User Interfaces
Abstracts and Workshops (VRW): "Workshop on Virtual Humans and Crowds in
Immersive Environments (VHCIE)"},
title={Audiovisual Coherence: Is Embodiment of Background Noise Sources a
Necessity?},
year={2024}
}
Late-Breaking Report: VR-CrowdCraft: Coupling and Advancing Research in Pedestrian Dynamics and Social Virtual Reality
VR-CrowdCraft is a newly formed interdisciplinary initiative, dedicated to the convergence and advancement of two distinct yet interconnected research fields: pedestrian dynamics (PD) and social virtual reality (VR). The initiative aims to establish foundational workflows for a systematic integration of PD data obtained from real-life experiments, encompassing scenarios ranging from smaller clusters of approximately ten individuals to larger groups comprising several hundred pedestrians, into immersive virtual environments (IVEs), addressing the following two crucial goals: (1) Advancing pedestrian dynamics analysis and (2) Advancing virtual pedestrian behavior: authentic populated IVEs and new PD experiments. The LBR presentation will focus on goal 1.
TENETvr: Comprehensible Temporal Teleportation in Time-Varying Virtual Environments
The iterative design process of virtual environments commonly generates a history of revisions that each represent the state of the scene at a different point in time. Browsing through these discrete time points by common temporal navigation interfaces like time sliders, however, can be inaccurate and lead to an uncomfortably high number of visual changes in a short time. In this paper, we therefore present a novel technique called TENETvr (Temporal Exploration and Navigation in virtual Environments via Teleportation) that allows for efficient teleportation-based travel to time points in which a particular object of interest changed. Unlike previous systems, we suggest that changes affecting other objects in the same time span should also be mediated before the teleport to improve predictability. We therefore propose visualizations for nine different types of additions, property changes, and deletions. In a formal user study with 20 participants, we confirmed that this addition leads to significantly more efficient change detection, lower task loads, and higher usability ratings, therefore reducing temporal disorientation.
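The change classification that TENETvr visualizes can be sketched as a diff between two scene revisions; the flat dictionary scene model below is an assumption for illustration only:

def diff_revisions(old, new):
    added   = [oid for oid in new if oid not in old]
    deleted = [oid for oid in old if oid not in new]
    changed = [(oid, prop)
               for oid in old.keys() & new.keys()
               for prop in old[oid]
               if old[oid].get(prop) != new[oid].get(prop)]
    return added, changed, deleted

rev3 = {'lamp': {'pos': (0, 0), 'color': 'white'}}
rev4 = {'lamp': {'pos': (2, 0), 'color': 'white'}, 'bench': {'pos': (5, 1)}}
print(diff_revisions(rev3, rev4))  # (['bench'], [('lamp', 'pos')], [])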
@INPROCEEDINGS{10316438,
author={Rupp, Daniel and Kuhlen, Torsten and Weissker, Tim},
booktitle={2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
title={{TENETvr: Comprehensible Temporal Teleportation in Time-Varying Virtual Environments}},
year={2023},
volume={},
number={},
pages={922-929},
doi={10.1109/ISMAR59233.2023.00108}}
Who Did What When? Discovering Complex Historical Interrelations in Immersive Virtual Reality
Traditional digital tools for exploring historical data mostly rely on conventional 2D visualizations, which often cannot reveal all relevant interrelationships between historical fragments (e.g., persons or events). In this paper, we present a novel interactive exploration tool for historical data in VR, which represents fragments as spheres in a 3D environment and arranges them around the user based on their temporal, geographical, categorical, and semantic similarity. Quantitative and qualitative results from a user study with 29 participants revealed that most participants considered the virtual space and the abstract fragment representation well-suited to explore historical data and to discover complex interrelationships. These results were particularly underlined by high usability scores in terms of attractiveness, stimulation, and novelty, while researching historical facts with our system did not impose unexpectedly high task loads. Additionally, the insights from our post-study interviews provided valuable suggestions for future developments to further expand the possibilities of our system.
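The placement idea can be sketched as mapping a weighted combination of per-dimension similarities to a distance from the user, so that more similar fragments appear closer; the weights and the linear mapping are illustrative assumptions, not the system's actual layout algorithm:

def placement_radius(similarities, weights, r_min=1.0, r_max=8.0):
    # similarities/weights per dimension (temporal, geographical,
    # categorical, semantic), each similarity in [0, 1]
    s = sum(w * sim for w, sim in zip(weights, similarities)) / sum(weights)
    return r_max - s * (r_max - r_min)  # higher similarity -> smaller radius

print(placement_radius([0.9, 0.4, 1.0, 0.7], weights=[1, 1, 1, 2]))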
@INPROCEEDINGS{10316480,
author={Derksen, Melanie and Becker, Julia and Elahi, Mohammad Fazleh and Maier, Angelika and Maile, Marius and Pätzold, Ingo and Penningroth, Jonas and Reglin, Bettina and Rothgänger, Markus and Cimiano, Philipp and Schubert, Erich and Schwandt, Silke and Kuhlen, Torsten and Botsch, Mario and Weissker, Tim},
booktitle={2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
title={{Who Did What When? Discovering Complex Historical Interrelations in Immersive Virtual Reality}},
year={2023},
volume={},
number={},
pages={129-137},
doi={10.1109/ISMAR59233.2023.00027}}
Who's next? Integrating Non-Verbal Turn-Taking Cues for Embodied Conversational Agents
Taking turns in a conversation is a delicate interplay of various signals, which we as humans can easily decipher. Embodied conversational agents (ECAs) communicating with humans should leverage this ability for smooth and enjoyable conversations. Extensive research has analyzed human turn-taking cues, and attempts have been made to predict turn-taking based on observed cues. These cues vary from prosodic, semantic, and syntactic modulation over adapted gesture and gaze behavior to actively used respiration. However, when generating such behavior for social robots or ECAs, often only single modalities were considered, e.g., gazing. We strive to design a comprehensive system that produces cues for all non-verbal modalities: gestures, gaze, and breathing. The system provides valuable cues without requiring speech content adaptation. We evaluated our system in a VR-based user study with N = 32 participants executing two subsequent tasks. First, we asked them to listen to two ECAs taking turns in several conversations. Second, participants engaged in taking turns with one of the ECAs directly. We examined the system’s usability and the perceived social presence of the ECAs' turn-taking behavior, both with respect to each individual non-verbal modality and their interplay. While we found effects of gesture manipulation in interactions with the ECAs, no effects on social presence were found.
This work is licensed under a Creative Commons Attribution 4.0 International License.
@InProceedings{Ehret2023,
author = {Jonathan Ehret and Andrea Bönsch and Patrick Nossol and Cosima A. Ermert and Chinthusa Mohanathasan and Sabine J. Schlittmeier and Janina Fels and Torsten W. Kuhlen},
booktitle = {ACM International Conference on Intelligent Virtual Agents (IVA ’23)},
title = {Who's next? Integrating Non-Verbal Turn-Taking Cues for Embodied Conversational Agents},
year = {2023},
organization = {ACM},
pages = {8},
doi = {10.1145/3570945.3607312},
}
Effect of Head-Mounted Displays on Students’ Acquisition of Surgical Suturing Techniques Compared to an E-Learning and Tutor-Led Course: A Randomized Controlled Trial
Background: Although surgical suturing is one of the most important basic skills, many medical school graduates do not acquire sufficient knowledge of it due to its lack of integration into the curriculum or a shortage of tutors. E-learning approaches attempt to address this issue but still rely on the involvement of tutors. Furthermore, the learning experience and visual-spatial ability appear to play a critical role in surgical skill acquisition. Virtual reality head-mounted displays (HMDs) could address this, but the benefits of immersive and stereoscopic learning of surgical suturing techniques are still unclear.
Material and Methods: In this multi-arm randomized controlled trial, 150 novices participated. Three teaching modalities were compared: an e-learning course (monoscopic) and an HMD-based course (stereoscopic, immersive), both self-directed, as well as a tutor-led course with feedback. Suturing performance was recorded by video camera both before and after course participation (>26 hours of video material) and assessed in a blinded fashion using the OSATS Global Rating Score (GRS). Furthermore, the optical flow of the videos was determined using an algorithm. The number of sutures performed was counted, visual-spatial ability was measured with the mental rotation test (MRT), and courses were assessed with questionnaires.
Results: Students' self-assessment in the HMD-based course was comparable to that of the tutor-led course and significantly better than in the e-learning course (P=0.003). Course suitability was rated best for the tutor-led course (x̄=4.8), followed by the HMD-based (x̄=3.6) and e-learning (x̄=2.5) courses. The median GRS between courses was comparable (P=0.15) at 12.4 (95% CI 10.0–12.7) for the e-learning course, 14.1 (95% CI 13.0–15.0) for the HMD-based course, and 12.7 (95% CI 10.3–14.2) for the tutor-led course. However, the GRS was significantly correlated with the number of sutures performed during the training session (P=0.002), but not with visual-spatial ability (P=0.626). Optical flow (R2=0.15, P<0.001) and the number of sutures performed (R2=0.73, P<0.001) can be used as additional measures to GRS.
Conclusion: The use of HMDs with stereoscopic and immersive video provides advantages in the learning experience and should be preferred over a traditional web application for e-learning. Contrary to expectations, feedback is not necessary for novices to achieve a sufficient level in suturing; only the number of surgical sutures performed during training is a good determinant of competence improvement. Nevertheless, feedback still enhances the learning experience. Therefore, automated assessment as an alternative feedback approach could further improve self-directed learning modalities. As a next step, the data from this study could be used to develop such automated AI-based assessments.
@Article{Peters2023,
author = {Philipp Peters and Martin Lemos and Andrea Bönsch and Mark Ooms and Max Ulbrich and Ashkan Rashad and Felix Krause and Myriam Lipprandt and Torsten Wolfgang Kuhlen and Rainer Röhrig and Frank Hölzle and Behrus Puladi},
journal = {International Journal of Surgery},
title = {Effect of head-mounted displays on students' acquisition of surgical suturing techniques compared to an e-learning and tutor-led course: A randomized controlled trial},
year = {2023},
month = {may},
volume = {Publish Ahead of Print},
doi = {10.1097/js9.0000000000000464},
publisher = {Ovid Technologies (Wolters Kluwer Health)},
}
Voice Quality and its Effects on University Students' Listening Effort in a Virtual Seminar Room
A teacher’s poor voice quality may increase listening effort in pupils, but it is unclear whether this effect persists in adult listeners. Thus, the goal of this study is to examine the impact of vocal hoarseness on university students' listening effort in a virtual seminar room. An audio-visual immersive virtual reality environment is utilized to simulate a typical seminar room with common background sounds and fellow students represented as wooden mannequins. Participants wear a head-mounted display and are equipped with two controllers to engage in a dual-task paradigm. The primary task is to listen to a virtual professor reading short texts and retain relevant content information to be recalled later. The texts are presented either in a normal or an imitated hoarse voice. In parallel, participants perform a secondary task: responding to tactile vibration patterns via the controllers. It is hypothesized that listening to the hoarse voice induces listening effort, resulting in more cognitive resources needed for primary task performance while secondary task performance is hindered. Results are presented and discussed in light of students’ cognitive performance and listening challenges in higher education learning environments.
@INPROCEEDINGS{Schiller:977871,
author = {Schiller, Isabel Sarah and Aspöck, Lukas and Breuer,
Carolin and Ehret, Jonathan and Bönsch, Andrea and Fels,
Janina and Kuhlen, Torsten and Schlittmeier, Sabine Janina},
title = {{V}oice Quality and its Effects on University
Students' Listening Effort in a Virtual Seminar Room},
year = {2023},
month = {Dec},
date = {2023-12-04},
organization = {Acoustics 2023, Sydney (Australia), 4
Dec 2023 - 8 Dec 2023},
doi = {10.1121/10.0022982}
}
A Case Study on Providing Accessibility-Focused In-Transit Architectures for Neural Network Simulation and Analysis
Due to the ever-increasing availability of high-performance computing infrastructure, developers can simulate increasingly complex models. However, the increased complexity comes with new challenges regarding data processing and visualization due to the sheer size of simulations. Exploring simulation results needs to be handled efficiently via in-situ/in-transit analysis during run-time. However, most existing in-transit solutions require sophisticated prior knowledge and significant alterations to existing simulation and visualization code, which creates a high entry barrier for many projects. In this work, we report how Insite, a lightweight in-transit pipeline, provided in-transit visualization and computation capability to various research applications in the neuronal network simulation domain. We describe the development process, including feedback from developers and domain experts, and discuss implications.
@inproceedings{kruger2023case,
title={A Case Study on Providing Accessibility-Focused In-Transit Architectures for Neural Network Simulation and Analysis},
author={Kr{\"u}ger, Marcel and Oehrl, Simon and Kuhlen, Torsten Wolfgang and Gerrits, Tim},
booktitle={International Conference on High Performance Computing},
pages={277--287},
year={2023},
organization={Springer}
}
Towards Plausible Cognitive Research in Virtual Environments: The Effect of Audiovisual Cues on Short-Term Memory in Two Talker Conversations
When three or more people are involved in a conversation, often one conversational partner listens to what the others are saying and has to remember the conversational content. The setups in cognitive-psychological experiments often differ substantially from everyday listening situations by neglecting such audiovisual cues. The presence of speech-related audiovisual cues, such as the spatial position, and the appearance or non-verbal behavior of the conversing talkers may influence the listener's memory and comprehension of conversational content. In our project, we provide first insights into the contribution of acoustic and visual cues to short-term memory and (social) presence. Analyses have shown that memory performance varies with increasingly plausible audiovisual characteristics. Furthermore, we have conducted a series of experiments regarding the influence of the visual reproduction medium (virtual reality vs. traditional computer screens) and spatial or content audio-visual mismatch on auditory short-term memory performance. Adding virtual embodiments to the talkers allowed us to conduct experiments on the influence of the fidelity of co-verbal gestures and turn-taking signals. Thus, we are able to provide a more plausible paradigm for investigating memory for two-talker conversations within an interactive audiovisual virtual reality environment.
@InProceedings{Ehret2023Audictive,
author = {Jonathan Ehret and Cosima A. Ermert and Chinthusa Mohanathasan and Janina Fels and Torsten W. Kuhlen and Sabine J. Schlittmeier},
booktitle = {Proceedings of the 1st AUDICTIVE Conference},
title = {Towards Plausible Cognitive Research in Virtual Environments: The Effect of Audiovisual Cues on Short-Term Memory in Two-Talker Conversations},
year = {2023},
pages = {68-72},
doi = {10.18154/RWTH-2023-08409},
}
DasherVR: Evaluating a Predictive Text Entry System in Immersive Virtual Reality
Inputting text fluently in virtual reality is a topic still under active research, since many previously presented solutions have drawbacks in speed, error rate, privacy, or accessibility. To address these drawbacks, in this paper we adapted the predictive text entry system "Dasher" into an immersive virtual environment. Our evaluation with 20 participants shows that Dasher offers a good user experience with input speeds similar to other virtual text input techniques in the literature while maintaining low error rates. In combination with positive user feedback, we therefore believe that DasherVR is a promising basis for further research on accessible text input in immersive virtual reality.
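Dasher's core mechanism is inherited from arithmetic coding: each possible next character occupies a sub-interval of the display sized by its predicted probability, and steering into an interval rescales it into the new prediction context. The toy context-free language model below is an illustrative stand-in for a real one:

def partition(interval, probabilities):
    lo, hi = interval
    span, boxes, acc = hi - lo, {}, lo
    for char, p in sorted(probabilities.items()):
        boxes[char] = (acc, acc + p * span)
        acc += p * span
    return boxes

lm = {'a': 0.5, 'b': 0.3, 'c': 0.2}        # toy next-character model
boxes = partition((0.0, 1.0), lm)
boxes_after_a = partition(boxes['a'], lm)  # zooming into 'a' rescales the boxes
print(boxes_after_a)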
@inproceedings{pape2023,
title = {{{DasherVR}}: {{Evaluating}} a {{Predictive Text Entry System}} in {{Immersive Virtual Reality}}},
booktitle = {Towards an {{Inclusive}} and {{Accessible Metaverse}} at {{CHI}}'23},
author = {Pape, Sebastian and Ackermann, Jan Jakub and Weissker, Tim and Kuhlen, Torsten W.},
doi = {10.18154/RWTH-2023-05093},
year = {2023}
}
A Case Study on Providing Immersive Visualization for Neuronal Network Data Using COTS Soft- and Hardware
COTS VR hardware and modern game engines create the impression that bringing even complex data into VR has become easy. In this work, we investigate to what extent game engines can support the development of immersive visualization software with a case study. We discuss how the engine can support the development and where it falls short, e.g., failing to provide acceptable rendering performance for medium and large-sized data sets without using more sophisticated features.
@INPROCEEDINGS{10108843,
author={Krüger, Marcel and Li, Qin and Kuhlen, Torsten W. and Gerrits, Tim},
booktitle={2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
title={A Case Study on Providing Immersive Visualization for Neuronal Network Data Using COTS Soft- and Hardware},
year={2023},
volume={},
number={},
pages={201-205},
doi={10.1109/VRW58643.2023.00050}}
Enhanced Auditoriums for Attending Talks in Social Virtual Reality
Replicating traditional auditorium layouts for attending talks in social virtual reality often results in poor visibility of the presentation and a reduced feeling of being there together with others. Motivated by the use case of academic conferences, we therefore propose to display miniature representations of the stage close to the viewers for enhanced presentation visibility as well as group table arrangements for enhanced social co-watching. We conducted an initial user study with 12 participants in groups of three to evaluate the influence of these ideas on audience experience. Our results confirm the hypothesized positive effects of both enhancements and show that their combination was particularly appreciated by audience members. Our results therefore strongly encourage us to rethink conventional auditorium layouts in social virtual reality.
@inproceedings{10.1145/3544549.3585718,
author = {Weissker, Tim and Pieters, Leander and Kuhlen, Torsten},
title = {Enhanced Auditoriums for Attending Talks in Social Virtual Reality},
year = {2023},
isbn = {9781450394222},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3544549.3585718},
doi = {10.1145/3544549.3585718},
abstract = {Replicating traditional auditorium layouts for attending talks in social virtual reality often results in poor visibility of the presentation and a reduced feeling of being there together with others. Motivated by the use case of academic conferences, we therefore propose to display miniature representations of the stage close to the viewers for enhanced presentation visibility as well as group table arrangements for enhanced social co-watching. We conducted an initial user study with 12 participants in groups of three to evaluate the influence of these ideas on audience experience. Our results confirm the hypothesized positive effects of both enhancements and show that their combination was particularly appreciated by audience members. Our results therefore strongly encourage us to rethink conventional auditorium layouts in social virtual reality.},
booktitle = {Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems},
articleno = {101},
numpages = {7},
keywords = {Audience Experience, Head-Mounted Display, Multi-User, Social Interaction, Virtual Presentations, Virtual Reality},
location = {Hamburg, Germany},
series = {CHI EA '23}
}
Advantages of a Training Course for Surgical Planning in Virtual Reality in Oral and Maxillofacial Surgery
Background: As an integral part of computer-assisted surgery, virtual surgical planning (VSP) leads to significantly better surgery results, such as for oral and maxillofacial reconstruction with microvascular grafts of the fibula or iliac crest. It is performed on a 2D computer desktop (DS) based on preoperative medical imaging. However, in this environment, VSP is associated with shortcomings, such as a time-consuming planning process and the requirement of a learning process. Therefore, a virtual reality (VR)-based VSP application has great potential to reduce or even overcome these shortcomings due to the benefits of visuospatial vision, bimanual interaction, and full immersion. However, the efficacy of such a VR environment has not yet been investigated.
Objective: Does VR offer advantages in the learning process and working speed while providing similarly good results compared to a traditional DS working environment?
Methods: During a training course, novices were taught how to use a software application in a DS environment (3D Slicer) and in a VR environment (Elucis) for the segmentation of fibulae and os coxae (n = 156), and they were asked to carry out the maneuvers as accurately and quickly as possible. The individual learning processes in both environments were compared using objective criteria (time and segmentation performance) and self-reported questionnaires. The models resulting from the segmentation were compared mathematically (Hausdorff distance and Dice coefficient) and evaluated by two experienced radiologists in a blinded manner (score).
Conclusions: The more rapid learning process and the ability to work faster in the VR environment could save time and reduce the VSP workload, providing certain advantages over the DS environment.
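For readers unfamiliar with the segmentation-comparison metrics named in the methods, this is a worked sketch of the Dice coefficient on binary voxel masks (NumPy assumed; the Hausdorff distance is omitted for brevity, and the toy arrays are illustrative):

import numpy as np

def dice(a, b):
    # Dice = 2|A intersect B| / (|A| + |B|); 1.0 means identical masks
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

a = np.zeros((4, 4, 4), bool); a[1:3, 1:3, 1:3] = True
b = np.zeros((4, 4, 4), bool); b[1:4, 1:3, 1:3] = True
print(round(float(dice(a, b)), 3))  # 0.8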
@article{Ulbrich2022,
title={Advantages of a Training Course for Surgical Planning in Virtual Reality in Oral and Maxillofacial Surgery},
author={Ulbrich, M. and Van den Bosch, V. and Bönsch, A. and Gruber, L. J. and Ooms, M. and Melchior, C. and Motmaen, I. and Wilpert, C. and Rashad, A. and Kuhlen, T. W. and Hölzle, F. and Puladi, B.},
journal={JMIR Serious Games},
note={28/11/2022:40541 (forthcoming/in press)},
year={2022},
publisher={JMIR Publications Inc., Toronto, Canada}
}
Poster: Memory and Listening Effort in Two-Talker Conversations: Does Face Visibility Help Us Remember?
Listening to and remembering conversational content is a highly demanding task that requires the interplay of auditory processes and several cognitive functions. In face-to-face conversations, it is quite impossible for two talkers' audio signals to originate from the same spatial position or for their faces to be hidden from view. The availability of such audiovisual cues when listening potentially influences memory and comprehension of the heard content. In the present study, we investigated the effect of static visual faces of two talkers and cognitive functions on the listener’s short-term memory of conversations and listening effort. Participants performed a dual-task paradigm including a primary listening task, where a conversation between two spatially separated talkers (+/- 60°) with static faces was presented. In parallel, a vibrotactile task was administered, independently of both visual and auditory modalities. To investigate the possibility of person-specific factors influencing short-term memory, we assessed additional cognitive functions like working memory. We discuss our results in terms of the role that visual information and cognitive functions play in short-term memory of conversations.
@InProceedings{Mohanathasan2023ESCoP,
author = {Chinthusa Mohanathasan and Jonathan Ehret and Cosima A. Ermert and Janina Fels and Torsten Wolfgang Kuhlen and Sabine J. Schlittmeier},
booktitle = {23. Conference of the European Society for Cognitive Psychology, Porto, Portugal, ESCoP 2023},
title = {Memory and Listening Effort in Two-Talker Conversations: Does Face Visibility Help Us Remember?},
year = {2023},
}
Towards More Realistic Listening Research in Virtual Environments: The Effect of Spatial Separation of Two Talkers in Conversations on Memory and Listening Effort
Conversations between three or more people often include phases in which one conversational partner is the listener while the others are conversing. In face-to-face conversations, it is quite unlikely to have two talkers’ audio signals come from the same spatial location - yet monaural-diotic sound presentation is often realized in cognitive-psychological experiments. However, the availability of spatial cues probably influences the cognitive processing of heard conversational content. In the present study we test this assumption by investigating spatial separation of conversing talkers in the listener’s short-term memory and listening effort. To this end, participants were administered a dual-task paradigm. In the primary task, participants listened to a conversation between two alternating talkers in a non-noisy setting and answered questions on the conversational content after listening. The talkers’ audio signals were presented at a distance of 2.5m from the listener either spatially separated (+/- 60°) or co-located (0°; within-subject). As a secondary task, participants worked in parallel to the listening task on a vibrotactile stimulation task, which is detached from auditory and visual modalities. The results are reported and discussed in particular regarding future listening experiments in virtual environments.
@InProceedings{Mohanathasan2023DAGA,
author = {Chinthusa Mohanathasan and Jonathan Ehret and Cosima A. Ermert and Janina Fels and Torsten Wolfgang Kuhlen and Sabine J. Schlittmeier},
booktitle = {49. Jahrestagung für Akustik, Hamburg, Germany, DAGA 2023},
title = {Towards More Realistic Listening Research in Virtual Environments: The Effect of Spatial Separation of Two Talkers in Conversations on Memory and Listening Effort},
year = {2023},
pages = {1425-1428},
doi = {10.18154/RWTH-2023-05116},
}
Towards Discovering Meaningful Historical Relationships in Virtual Reality
Traditional digital tools for exploring historical data mostly rely on conventional 2D visualizations, which often cannot reveal all relevant interrelationships between historical fragments. We are working on a novel interactive exploration tool for historical data in virtual reality, which arranges fragments in a 3D environment based on their temporal, spatial and categorical proximity to a reference fragment. In this poster, we report on an initial expert review of our approach, giving us valuable insights into the use cases and requirements that inform our further developments.
@INPROCEEDINGS{Derksen2023,
author={Derksen, Melanie and Weissker, Tim and Kuhlen, Torsten and Botsch, Mario},
booktitle={2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
title={Towards Discovering Meaningful Historical Relationships in Virtual Reality},
year={2023},
volume={},
number={},
pages={697-698},
doi={10.1109/VRW58643.2023.00191}}
Gaining the High Ground: Teleportation to Mid-Air Targets in Immersive Virtual Environments
Most prior teleportation techniques in virtual reality are bound to target positions in the vicinity of selectable scene objects. In this paper, we present three adaptations of the classic teleportation metaphor that enable the user to travel to mid-air targets as well. Inspired by related work on the combination of teleports with virtual rotations, our three techniques differ in the extent to which elevation changes are integrated into the conventional target selection process. Elevation can be specified either simultaneously, as a connected second step, or separately from horizontal movements. A user study with 30 participants indicated a trade-off between the simultaneous method leading to the highest accuracy and the two-step method inducing the lowest task load as well as receiving the highest usability ratings. The separate method was least suitable on its own but could serve as a complement to one of the other approaches. Based on these findings and previous research, we define initial design guidelines for mid-air navigation techniques.
@ARTICLE{10049698,
author={Weissker, Tim and Bimberg, Pauline and Gokhale, Aalok Shashidhar and Kuhlen, Torsten and Froehlich, Bernd},
journal={IEEE Transactions on Visualization and Computer Graphics},
title={Gaining the High Ground: Teleportation to Mid-Air Targets in Immersive Virtual Environments},
year={2023},
volume={29},
number={5},
pages={2467-2477},
keywords={Teleportation;Navigation;Avatars;Visualization;Task analysis;Floors;Virtual environments;Virtual Reality;3D User Interfaces;3D Navigation;Head-Mounted Display;Teleportation;Flying;Mid-Air Navigation},
doi={10.1109/TVCG.2023.3247114}}
Poster: Enhancing Proxy Localization in World in Miniatures Focusing on Virtual Agents
Virtual agents (VAs) are increasingly utilized in large-scale architectural immersive virtual environments (LAIVEs) to enhance user engagement and presence. However, challenges persist in effectively localizing these VAs for user interactions and optimally orchestrating them for an interactive experience. To address these issues, we propose to extend world in miniatures (WIMs) through different localization and manipulation techniques as these 3D miniature scene replicas embedded within LAIVEs have already demonstrated effectiveness for wayfinding, navigation, and object manipulation. The contribution of our ongoing research is thus the enhancement of manipulation and localization capabilities within WIMs, focusing on the use case of VAs.
@InProceedings{Boensch2023c,
author = {Andrea Bönsch and Radu-Andrei Coanda and Torsten W. Kuhlen},
booktitle = {{V}irtuelle und {E}rweiterte {R}ealit\"at, 14.
{W}orkshop der {GI}-{F}achgruppe {VR}/{AR}},
title = {Enhancing Proxy Localization in World in
Miniatures Focusing on Virtual Agents},
year = {2023},
organization = {Gesellschaft für Informatik e.V.},
doi = {10.18420/vrar2023_3381}
}
Poster: Whom Do You Follow? Pedestrian Flows Constraining the User’s Navigation during Scene Exploration
In this work-in-progress, we strive to combine two wayfinding techniques supporting users in gaining scene knowledge, namely (i) the River Analogy, in which users are considered as boats automatically floating down predefined rivers, e.g., streets in an urban scene, and (ii) virtual pedestrian flows as social cues indirectly guiding users through the scene. In our combined approach, the pedestrian flows function as rivers. To navigate through the scene, users leash themselves to a pedestrian of choice, considered as boat, and are dragged along the flow towards an area of interest. Upon arrival, users can detach themselves to freely explore the site without navigational constraints. We briefly outline our approach, and discuss the results of an initial study focusing on various leashing visualizations.
@InProceedings{Boensch2023b,
author = {Andrea Bönsch and Lukas B. Zimmermann and Jonathan Ehret and Torsten W. Kuhlen},
booktitle = {ACM International Conference on Intelligent Virtual Agents (IVA ’23)},
title = {Whom Do You Follow? Pedestrian Flows Constraining the User’s Navigation during Scene Exploration},
year = {2023},
organization = {ACM},
pages = {3},
doi = {10.1145/3570945.3607350},
}
Poster: Where Do They Go? Overhearing Conversing Pedestrian Groups during Scene Exploration
On entering an unknown immersive virtual environment, a user’s first task is gaining knowledge about the respective scene, termed scene exploration. While many techniques for aided scene exploration exist, such as virtual guides, or maps, unaided wayfinding through pedestrians-as-cues is still in its infancy. We contribute to this research by indirectly guiding users through pedestrian groups conversing about their target location. A user who overhears the conversation without being a direct addressee can consciously decide whether to follow the group to reach an unseen point of interest. We outline our approach and give insights into the results of a first feasibility study in which we compared our new approach to non-talkative groups and groups conversing about random topics.
@InProceedings{Boensch2023a,
author = {Andrea Bönsch and Till Sittart and Jonathan Ehret and Torsten W. Kuhlen},
booktitle = {ACM International Conference on Intelligent Virtual Agents (IVA ’23)},
title = {Where Do They Go? Overhearing Conversing Pedestrian Groups during Scene Exploration},
year = {2023},
pages = {3},
publisher = {ACM},
doi = {10.1145/3570945.3607351},
}
AuViST - An Audio-Visual Speech and Text Database for the Heard-Text-Recall Paradigm
The Audio-Visual Speech and Text (AuViST) database provides additional material for the heard-text-recall (HTR) paradigm by Schlittmeier and colleagues. German audio recordings in male and female voice as well as matching face-tracking data are provided for all texts.
Poster: Memory and Listening Effort in Conversations: The Role of Spatial Cues and Cognitive Functions
Conversations involving three or more people often include phases where one conversational partner listens to what the others are saying and has to remember the conversational content. It is possible that the presence of speech-related auditory information, such as different spatial positions of conversing talkers, influences the listener's memory and comprehension of conversational content. However, in cognitive-psychological experiments, talkers’ audio signals are often presented diotically, i.e., identically to both ears as mono signals. This does not reflect face-to-face conversations, where two talkers’ audio signals never come from the same spatial location. Therefore, in the present study, we examine how the spatial separation of two conversing talkers affects the listener’s short-term memory of heard information and listening effort. To accomplish this, participants were administered a dual-task paradigm. In the primary task, participants listened to a conversation between a female and a male talker and then responded to content-related questions. The talkers’ audio signals were presented via headphones at a distance of 2.5 m from the listener, either spatially separated (+/- 60°) or co-located (0°). In parallel to this listening task, participants performed a vibrotactile pattern recognition task as a secondary task that is independent of both auditory and visual modalities. In addition, we measured participants’ working memory capacity, selective visual attention, and mental speed to control for listener-specific characteristics that may affect memory performance. We discuss the extent to which spatial cues affect higher-level auditory cognition, specifically short-term memory of conversational content.
@InProceedings{Mohanathasan2023TeaP,
author = {Chinthusa Mohanathasan and Jonathan Ehret and Cosima A. Ermert and Janina Fels and Torsten Wolfgang Kuhlen and Sabine J. Schlittmeier},
booktitle = {Abstracts of the 65th TeaP: Tagung experimentell arbeitender Psycholog:innen, Conference of Experimental Psychologists},
title = {Memory and Listening Effort in Conversations: The Role of Spatial Cues and Cognitive Functions},
year = {2023},
pages = {252-252},
}
Audio-Visual Content Mismatches in the Serial Recall Paradigm
In many everyday scenarios, short-term memory is crucial for human interaction, e.g., when remembering a shopping list or following a conversation. A well-established paradigm to investigate short-term memory performance is the serial recall paradigm. Here, participants are presented with a list of digits in random order and are asked to memorize the order in which the digits were presented. So far, research in cognitive psychology has mostly focused on the effect of auditory distractors on the recall of visually presented items. The influence of visual distractors on the recall of auditory items has mostly been ignored. In the scope of this talk, we present an audio-visual serial recall task we designed. Along with the auditory presentation of the to-be-remembered digits, participants saw the face of a virtual human, moving the lips according to the spoken words. However, the gender of the face did not always match the gender of the voice heard, hence introducing an audio-visual content mismatch. The results give further insights into the interplay of visual and auditory stimuli in serial recall experiments.
@InProceedings{Ermert2023DAGA,
  author = {Cosima A. Ermert and Jonathan Ehret and Torsten Wolfgang Kuhlen and Chinthusa Mohanathasan and Sabine J. Schlittmeier and Janina Fels},
  booktitle = {49. Jahrestagung für Akustik (DAGA 2023), Hamburg, Germany},
  title = {Audio-Visual Content Mismatches in the Serial Recall Paradigm},
  year = {2023},
  pages = {1429--1430},
}
Poster: Insite Pipeline - A Pipeline Enabling In-Transit Processing for Arbor, NEST and TVB
Simulation of neuronal networks has steadily advanced and now allows for larger and more complex models. However, scaling simulations to such sizes comes with issues and challenges. Especially the amount of data produced, as well as the runtime of the simulation, can be limiting. Often, storing all data on disk is impossible, and users might have to wait for a long time until they can process the data. A standard solution in simulation science is to use in-transit approaches. In-transit implementations allow users to access data while the simulation is still running and do parallel processing outside the simulation. This allows for early insights into the results, early stopping of simulations that are not promising, or even steering of the simulations. Existing in-transit solutions, however, are often complex to integrate into the workflow as they rely on integration into simulators and often use data formats that are complex to handle. This is especially constraining in the context of the multi-disciplinary research conducted in the HBP, as such an important feature should be accessible to all users.
To remedy this, we developed Insite, a pipeline that allows easy in-transit access to simulation data of multiscale simulations conducted with TVB, NEST, and Arbor.
@misc{kruger_marcel_2023_7849225,
author = {Krüger, Marcel and
Gerrits, Tim and
Kuhlen, Torsten and
Weyers, Benjamin},
title = {{Insite Pipeline - A Pipeline Enabling In-Transit
Processing for Arbor, NEST and TVB}},
month = mar,
year = 2023,
publisher = {Zenodo},
doi = {10.5281/zenodo.7849225},
url = {https://doi.org/10.5281/zenodo.7849225}
}
Insite: A Pipeline Enabling In-Transit Visualization and Analysis for Neuronal Network Simulations
Neuronal network simulators are central to computational neuroscience, enabling the study of the nervous system through in-silico experiments. Through the utilization of high-performance computing resources, these simulators are capable of simulating increasingly complex and large networks of neurons today. Yet, the increased capabilities introduce a challenge to the analysis and visualization of the simulation results. In this work, we propose a pipeline for in-transit analysis and visualization of data produced by neuronal network simulators. The pipeline is able to couple with simulators, enabling querying, filtering, and merging data from multiple simulation instances. Additionally, the architecture allows user-defined plugins that perform analysis tasks in the pipeline. The pipeline applies traditional REST API paradigms and utilizes data formats such as JSON to provide easy access to the generated data for visualization and further processing. We present and assess the proposed architecture in the context of neuronal network simulations generated by the NEST simulator.
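Since the pipeline exposes data via REST endpoints and JSON, a small client can poll a running simulation while it computes. The following Python sketch illustrates that access pattern only; the endpoint path and parameter names are assumptions for illustration, not the actual Insite API.

import requests  # third-party HTTP client, assumed installed

# Hypothetical in-transit endpoint and query parameters; the real Insite
# routes may differ. The point is the access pattern: plain HTTP + JSON.
BASE_URL = "http://localhost:8080/nest"

def fetch_spikes(from_time, to_time):
    """Query spike events recorded so far from the running simulation."""
    response = requests.get(
        f"{BASE_URL}/spikes",
        params={"fromTime": from_time, "toTime": to_time},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()  # plain JSON keeps downstream tools simple

# Poll while the simulation is still running: early insight, early stopping.
spikes = fetch_spikes(0.0, 100.0)
print(f"received {len(spikes)} spike records so far")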
@InProceedings{10.1007/978-3-031-23220-6_20,
author="Kr{\"u}ger, Marcel and Oehrl, Simon and Demiralp, Ali C. and Spreizer, Sebastian and Bruchertseifer, Jens and Kuhlen, Torsten W. and Gerrits, Tim and Weyers, Benjamin",
editor="Anzt, Hartwig and Bienz, Amanda and Luszczek, Piotr and Baboulin, Marc",
title="Insite: A Pipeline Enabling In-Transit Visualization and Analysis for Neuronal Network Simulations",
booktitle="High Performance Computing. ISC High Performance 2022 International Workshops",
year="2022",
publisher="Springer International Publishing",
address="Cham",
pages="295--305",
isbn="978-3-031-23220-6"
}
Performance Assessment of Diffusive Load Balancing for Distributed Particle Advection
Particle advection is the standard approach for extracting integral curves from vector fields. Efficient parallelization of particle advection is a challenging task due to the problem of load imbalance, in which processes are assigned unequal workloads, causing some of them to idle while the others are computing. Various approaches to load balancing exist, yet they all involve trade-offs such as increased inter-process communication or the need for central control structures. In this work, we present two local load balancing methods for particle advection based on the family of diffusive load balancing schemes. Each process has access to the blocks of its neighboring processes, which enables dynamic sharing of the particles based on a metric defined by the workload of the neighborhood. The approaches are assessed in terms of strong and weak scaling as well as load imbalance. We show that the methods reduce the total run-time of advection and are promising with regard to scaling as they operate locally on isolated process neighborhoods.
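To make the diffusive idea concrete, here is a minimal, self-contained sketch (not the authors' implementation: the ring topology, integer particle counts, and diffusion coefficient are simplifying assumptions). Each process repeatedly offloads a fraction of its excess particles to less-loaded direct neighbors, so balancing emerges from purely local exchanges:

# Minimal diffusive load-balancing sketch on a 1D ring of processes.
# Each step, a process sheds part of the workload difference to its two
# direct neighbors (diffusion coefficient alpha), using local data only.

def diffuse_step(loads, alpha=0.25):
    """One local balancing step; loads[i] = particle count on process i."""
    n = len(loads)
    transfers = [0] * n
    for i in range(n):
        for j in ((i - 1) % n, (i + 1) % n):  # isolated neighborhood only
            diff = loads[i] - loads[j]
            if diff > 0:                      # only the fuller side sends
                transfers[i] -= int(alpha * diff)
                transfers[j] += int(alpha * diff)
    return [l + t for l, t in zip(loads, transfers)]

loads = [1000, 10, 10, 10, 500, 10]           # imbalanced particle counts
for step in range(20):
    loads = diffuse_step(loads)
print(loads)  # converges toward an even distribution, no central control

Because every decision uses only neighborhood information, there is no central control structure and no global communication, which is exactly the trade-off the paper targets.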
Astray: A Performance-Portable Geodesic Ray Tracer
Geodesic ray tracing is a numerical method for computing the motion of matter and radiation in spacetime. It enables visualization of the geometry of spacetime and is an important tool for studying gravitational fields in the presence of astrophysical phenomena such as black holes. Although the method is largely established, solving the geodesic equation remains a computationally demanding task. In this work, we present Astray, a high-performance geodesic ray tracing library capable of running on a single computer or a cluster equipped with compute or graphics processing units. The library is able to visualize any spacetime given its metric tensor and contains optimized implementations of a wide range of spacetimes, including commonly studied ones such as Schwarzschild and Kerr. The performance of the library is evaluated on standard consumer hardware as well as a compute cluster through strong and weak scaling benchmarks. The results indicate that the system is capable of reaching interactive frame rates with increasing use of high-performance computing resources. We further introduce a user interface capable of remote rendering on a cluster for interactive visualization of spacetimes.
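The computational core is integrating the geodesic equation, a second-order ODE driven by the Christoffel symbols of the metric. As a hedged, checkable micro-example (Astray itself is a C++ library; this is not its API), the sketch below integrates geodesics of flat 2D space in polar coordinates with classical RK4, where geodesics must come out as straight lines:

import numpy as np

# Demo metric: flat 2D space in polar coordinates (r, theta). Its geodesic
# equations are r'' = r * theta'^2 and theta'' = -2 r' theta' / r, so the
# solutions must be straight lines, which makes the result easy to check.

def acceleration(x, v):
    """Right-hand side of the geodesic equation for flat polar coordinates."""
    r = x[0]
    return np.array([r * v[1] ** 2,
                     -2.0 * v[0] * v[1] / r])

def rk4_step(x, v, h):
    """One classical Runge-Kutta step for the coupled system (x, v)."""
    k1x, k1v = v, acceleration(x, v)
    k2x, k2v = v + 0.5 * h * k1v, acceleration(x + 0.5 * h * k1x, v + 0.5 * h * k1v)
    k3x, k3v = v + 0.5 * h * k2v, acceleration(x + 0.5 * h * k2x, v + 0.5 * h * k2v)
    k4x, k4v = v + h * k3v, acceleration(x + h * k3x, v + h * k3v)
    return (x + h / 6.0 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + h / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v))

x, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # start at r=1, moving tangentially
for _ in range(1000):
    x, v = rk4_step(x, v, 1e-3)
print(x[0] * np.cos(x[1]))  # r*cos(theta) stays ~1.0: the straight line X = 1

The same integrator structure applies to Schwarzschild or Kerr; only the Christoffel symbols change, which is why a library can support any spacetime given its metric tensor.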
@inproceedings{10.2312:vmv.20221208,
booktitle = {Vision, Modeling, and Visualization},
editor = {Bender, Jan and Botsch, Mario and Keim, Daniel A.},
title = {{Astray: A Performance-Portable Geodesic Ray Tracer}},
author = {Demiralp, Ali Can and Krüger, Marcel and Chao, Chu and Kuhlen, Torsten W. and Gerrits, Tim},
year = {2022},
publisher = {The Eurographics Association},
ISBN = {978-3-03868-189-2},
DOI = {10.2312/vmv.20221208}
}
Augmented Reality-Based Surgery on the Human Cadaver Using a New Generation of Optical Head-Mounted Displays: Development and Feasibility Study
Background: Although nearly one-third of the world's disease burden requires surgical care, only a small proportion of digital health applications are directly used in the surgical field. In the coming decades, the application of augmented reality (AR) with a new generation of optical see-through head-mounted displays (OST-HMDs) like the HoloLens (Microsoft Corp) has the potential to bring digital health into the surgical field. However, for the application to be performed on a living person, proof of performance must first be provided due to regulatory requirements. In this regard, cadaver studies could provide initial evidence.
Objective: The goal of the research was to develop an open-source system for AR-based surgery on human cadavers using freely available technologies.
Methods: We tested our system using an easy-to-understand scenario in which fractured zygomatic arches of the face had to be repositioned with visual and auditory feedback to the investigators using a HoloLens. Results were verified with postoperative imaging and assessed in a blinded fashion by 2 investigators. The developed system and scenario were qualitatively evaluated by consensus interview and individual questionnaires.
Results: The development and implementation of our system was feasible and could be realized in the course of a cadaver study. The AR system was found helpful by the investigators for spatial perception in addition to the combination of visual as well as auditory feedback. The surgical end point could be determined metrically as well as by assessment.
Conclusions: The development and application of an AR-based surgical system using freely available technologies to perform OST-HMD–guided surgical procedures in cadavers is feasible. Cadaver studies are suitable for OST-HMD–guided interventions to measure a surgical end point and provide an initial data foundation for future clinical trials. The availability of free systems for researchers could be helpful for a possible translation process from digital health to AR-based surgery using OST-HMDs in the operating theater via cadaver studies.
@article{puladi2022augmented,
title={Augmented Reality-Based Surgery on the Human Cadaver Using a New Generation of Optical Head-Mounted Displays: Development and Feasibility Study},
author={Puladi, Behrus and Ooms, Mark and Bellgardt, Martin and Cesov, Mark and Lipprandt, Myriam and Raith, Stefan and Peters, Florian and M{\"o}hlhenrich, Stephan Christian and Prescher, Andreas and H{\"o}lzle, Frank and others},
journal={JMIR Serious Games},
volume={10},
number={2},
pages={e34781},
year={2022},
publisher={JMIR Publications Inc., Toronto, Canada}
}
Poster: Measuring Listening Effort in Adverse Listening Conditions: Testing Two Dual Task Paradigms for Upcoming Audiovisual Virtual Reality Experiments
Listening to and remembering the content of conversations is a highly demanding task from a cognitive-psychological perspective. Particularly in adverse listening conditions, the cognitive resources available for higher-level processing of speech are reduced, since increased listening effort consumes more of the overall available cognitive resources. Applying audiovisual Virtual Reality (VR) environments to listening research could be highly beneficial for exploring cognitive performance for overheard content. In this study, we therefore evaluated two (secondary) tasks concerning their suitability for measuring cognitive spare capacity as an indicator of listening effort in audiovisual VR environments. In two experiments, participants were administered a dual-task paradigm comprising a listening (primary) task, in which a conversation between two talkers is presented, and one of two unrelated secondary tasks. Both experiments were carried out without additional background noise and under continuous noise. We discuss our results in terms of guidance for future experimental studies, especially in audiovisual VR environments.
@InProceedings{Mohanathasan2022ESCoP,
  author = {Chinthusa Mohanathasan and Jonathan Ehret and Cosima A. Ermert and Janina Fels and Torsten Wolfgang Kuhlen and Sabine J. Schlittmeier},
  booktitle = {22nd Conference of the European Society for Cognitive Psychology (ESCoP), Lille, France},
  title = {Measuring Listening Effort in Adverse Listening Conditions: Testing Two Dual Task Paradigms for Upcoming Audiovisual Virtual Reality Experiments},
  year = {2022},
}
The aixCAVE at RWTH Aachen University
At a large technical university like RWTH Aachen, there is enormous potential to use VR as a tool in research. In contrast to applications from the entertainment sector, many scientific application scenarios - for example, a 3D analysis of result data from simulated flows - depend not only on a high degree of immersion but also on a high resolution and excellent image quality of the display. In addition, the visual analysis of scientific data is often carried out and discussed in smaller teams. For these reasons, but also for simple ergonomic aspects (comfort, cybersickness), many technical and scientific VR applications cannot be implemented on the basis of head-mounted displays alone. To this day, it is therefore desirable for VR labs at universities and research institutions to install immersive large-screen rear-projection systems (CAVEs) in order to adequately support scientists. Due to the high investment costs, such systems are found at larger universities such as Aachen, Cologne, Munich, or Stuttgart, often operated by the computing centers as a central infrastructure accessible to all scientists at the university.
Late-Breaking Report: Natural Turn-Taking with Embodied Conversational Agents
Adding embodied conversational agents (ECAs) to immersive virtual environments (IVEs) becomes relevant in various application scenarios, for example, conversational systems. For successful interactions with these ECAs, they have to behave naturally, i.e., in the way a user would expect a real human to behave. Teaming up with acousticians and psychologists, we strive to explore turn-taking in VR-based interactions between either two ECAs or an ECA and a human user.
Late-Breaking Report: An Embodied Conversational Agent Supporting Scene Exploration by Switching between Guiding and Accompanying
In this late-breaking report, we first motivate the requirement for an embodied conversational agent (ECA) who combines characteristics of a virtual tour guide and a knowledgeable companion in order to allow users an interactive and adaptable, yet structured, exploration of an unknown immersive, architectural environment. Second, we roughly outline our proposed ECA's behavioral design, followed by a teaser on the planned user study.
Do Prosody and Embodiment Influence the Perceived Naturalness of Conversational Agents' Speech?
presented at ACM Symposium on Applied Perception (SAP)
For conversational agents' speech, all possible sentences have to be either prerecorded by voice actors or synthesized on demand. While synthesizing speech is more flexible and economical in production, it also potentially reduces the perceived naturalness of the agents, among other reasons due to mistakes at various linguistic levels. In our paper, we are interested in the impact of adequate and inadequate prosody, here particularly in terms of accent placement, on the perceived naturalness and aliveness of the agents. We compare (i) inadequate prosody, as generated by off-the-shelf text-to-speech (TTS) engines with synthetic output, (ii) the same inadequate prosody imitated by trained human speakers, and (iii) adequate prosody produced by those speakers. The speech was presented either as audio-only or by embodied, anthropomorphic agents, to investigate the potential masking effect of a simultaneous visual representation of those virtual agents. To this end, we conducted an online study with 40 participants listening to four different dialogues, each presented in the three speech levels and the two embodiment levels. Results confirmed that adequate prosody in human speech is perceived as more natural (and the agents are perceived as more alive) than inadequate prosody in both human (ii) and synthetic speech (i). Thus, it is not sufficient to just use a human voice for an agent's speech to be perceived as natural; it is decisive whether the prosodic realisation is adequate or not. Furthermore, and surprisingly, we found no masking effect by speaker embodiment, since neither a human voice with inadequate prosody nor a synthetic voice was judged as more natural when a virtual agent was visible compared to the audio-only condition. On the contrary, the human voice was even judged as less “alive” when accompanied by a virtual agent. In sum, our results emphasize, on the one hand, the importance of adequate prosody for perceived naturalness, especially in terms of accents being placed on important words in the phrase, while showing, on the other hand, that the embodiment of virtual agents plays a minor role in naturalness ratings of voices.
@article{Ehret2021a,
author = {Ehret, Jonathan and B\"{o}nsch, Andrea and Asp\"{o}ck, Lukas and R\"{o}hr, Christine T. and Baumann, Stefan and Grice, Martine and Fels, Janina and Kuhlen, Torsten W.},
title = {Do Prosody and Embodiment Influence the Perceived Naturalness of Conversational Agents’ Speech?},
journal = {ACM Transactions on Applied Perception},
year = {2021},
issue_date = {October 2021},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {18},
number = {4},
articleno = {21},
issn = {1544-3558},
url = {https://doi.org/10.1145/3486580},
doi = {10.1145/3486580},
numpages = {15},
keywords = {speech, audio, accentuation, prosody, text-to-speech, Embodied conversational agents (ECAs), virtual acoustics, embodiment}
}
Being Guided or Having Exploratory Freedom: User Preferences of a Virtual Agent’s Behavior in a Museum
A virtual guide in an immersive virtual environment allows users a structured experience without missing critical information. However, despite being in an interactive medium, the user is only a passive listener, while the embodied conversational agent (ECA) fulfills the active roles of wayfinding and conveying knowledge. Thus, for the use case of a virtual museum, we investigated whether users prefer a virtual guide or a free exploration accompanied by an ECA who imparts the same information as the guide. Results of a small within-subjects study with a head-mounted display are given and discussed, leading to the idea of combining the benefits of both conditions for higher user acceptance. Furthermore, the study indicated the feasibility of the carefully designed scene and the ECA's appearance.
We also submitted a GALA video entitled "An Introduction to the World of Internet Memes by Curator Kate: Guiding or Accompanying Visitors?" by D. Hashem, A. Bönsch, J. Ehret, and T.W. Kuhlen, showcasing our application.
IVA 2021 GALA Audience Award!
@inproceedings{Boensch2021b,
author = {B\"{o}nsch, Andrea and Hashem, David and Ehret, Jonathan and Kuhlen, Torsten W.},
title = {{Being Guided or Having Exploratory Freedom: User Preferences of a Virtual Agent's Behavior in a Museum}},
year = {2021},
isbn = {9781450386197},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3472306.3478339},
doi = {10.1145/3472306.3478339},
booktitle = {{Proceedings of the 21st ACM International Conference on Intelligent Virtual Agents}},
pages = {33–40},
numpages = {8},
keywords = {virtual agents, enjoyment, guiding, virtual reality, free exploration, museum, embodied conversational agents},
location = {Virtual Event, Japan},
series = {IVA '21}
}
Design and Evaluation of a Free-Hand VR-based Authoring Environment for Automated Vehicle Testing
Virtual Reality is increasingly used for safe evaluation and validation of autonomous vehicles by automotive engineers. However, the design and creation of virtual testing environments is a cumbersome process. Engineers are bound to utilize desktop-based authoring tools, and a high level of expertise is necessary. By performing scene authoring entirely inside VR, faster design iterations become possible. To this end, we propose a VR authoring environment that enables engineers to design road networks and traffic scenarios for automated vehicle testing based on free-hand interaction. We present a 3D interaction technique for the efficient placement and selection of virtual objects that is employed on a 2D panel. We conducted a comparative user study in which our interaction technique outperformed existing approaches regarding precision and task completion time. Furthermore, we demonstrate the effectiveness of the system by a qualitative user study with domain experts.
Nominated for the Best Paper Award.
Poster: Indirect User Guidance by Pedestrians in Virtual Environments
Scene exploration allows users to acquire scene knowledge when entering an unknown virtual environment. To support users in this endeavor, aided wayfinding strategies intentionally influence the user's wayfinding decisions through, e.g., signs or virtual guides.
Our focus, however, is an unaided wayfinding strategy, in which we use virtual pedestrians as social cues to indirectly and subtly guide users through virtual environments during scene exploration. We briefly outline the required pedestrian behavior and the results of a first feasibility study indicating the potential of the general approach.
@inproceedings{Boensch2021a,
booktitle = {ICAT-EGVE 2021 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments - Posters and Demos},
editor = {Maiero, Jens and Weier, Martin and Zielasko, Daniel},
title = {{Indirect User Guidance by Pedestrians in Virtual Environments}},
author = {Bönsch, Andrea and Güths, Katharina and Ehret, Jonathan and Kuhlen, Torsten W.},
year = {2021},
publisher = {The Eurographics Association},
ISSN = {1727-530X},
ISBN = {978-3-03868-159-5},
DOI = {10.2312/egve.20211336}
}
Poster: Virtual Optical Bench: A VR Learning Tool For Optical Design
The design of optical lens assemblies is a difficult process that requires considerable expertise. Today, this process is taught on physical optical benches, which are often too expensive for students to purchase. One way of circumventing these costs is to use software that simulates the optical bench. This work presents a virtual optical bench that leverages real-time ray tracing in combination with VR rendering to create a repeatable, non-hazardous, and feature-rich learning environment. The resulting application was evaluated in an expert review with 6 optical engineers.
@INPROCEEDINGS{Pape2021,
author = {Pape, Sebastian and Bellgardt, Martin and Gilbert, David and König, Georg and Kuhlen, Torsten W.},
booktitle = {2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
title = {Virtual Optical Bench: A VR learning tool for optical design},
year = {2021},
volume ={},
number = {},
pages = {635-636},
doi = {10.1109/VRW52623.2021.00200}
}
Poster: Prosodic and Visual Naturalness of Dialogs Presented by Conversational Virtual Agents
Conversational virtual agents, with and without visual representation, are becoming more present in our daily life, e.g., as intelligent virtual assistants on smart devices. To investigate the naturalness of both the speech and the nonverbal behavior of embodied conversational agents (ECAs), an interdisciplinary research group was initiated, consisting of phoneticians, computer scientists, and acoustic engineers. For a web-based pilot experiment, simple dialogs between a male and a female speaker were created, with three prosodic conditions. For condition 1, the dialog was created synthetically using a text-to-speech engine. In the other two prosodic conditions, human speakers were recorded, with 2) the erroneous accentuation of the text-to-speech synthesis of condition 1, and 3) a natural accentuation. Face tracking data of the recorded speakers was additionally obtained and applied as input data for the facial animation of the ECAs. Based on the recorded data, auralizations in a virtual acoustic environment were generated and presented as binaural signals to the participants, either in combination with the visual representation of the ECAs as short videos or without any visual feedback. A preliminary evaluation of the participants' responses to questions related to naturalness, presence, and preference is presented in this work.
@inproceedings{Aspoeck2021,
author = {Asp\"{o}ck, Lukas and Ehret, Jonathan and Baumann, Stefan and B\"{o}nsch, Andrea and R\"{o}hr, Christine T. and Grice, Martine and Kuhlen, Torsten W. and Fels, Janina},
title = {Prosodic and Visual Naturalness of Dialogs Presented by Conversational Virtual Agents},
year = {2021},
note = {Hybrid conference},
month = {Aug},
date = {2021-08-15},
organization = {47. Jahrestagung für Akustik, Wien (Austria), 15 Aug 2021 - 18 Aug 2021},
url = {https://vr.rwth-aachen.de/publication/02207/}
}
Virtual Reality and Mixed Reality
We are pleased to present in this LNCS volume the scientific proceedings of EuroXR 2021, the 18th EuroXR International Conference, organized by CNR-STIIMA, Italy, which took place during November 24-26, 2021. Due to the COVID-19 pandemic, EuroXR 2021 was held as a virtual conference to guarantee the best audience while maintaining the safest conditions for the attendees. This conference follows a series of successful international conferences initiated in 2004 by the INTUITION Network of Excellence in Virtual and Augmented Reality, supported by the European Commission until 2008. Embedded within the Joint Virtual Reality Conference (JVRC) from 2009 to 2013, it was known as the EuroVR International Conference from 2014 until last year.
The focus of these conferences is to present, each year, novel Virtual Reality (VR) through to Mixed Reality (MR) technologies, also named eXtended Reality (XR), including software systems, immersive rendering technologies, 3D user interfaces, and applications. These conferences aim to foster European engagement between industry, academia, and the public sector, to promote the development and deployment of XR in new and emerging, but also existing, fields.
Since 2017, EuroXR (https://www.euroxr-association.org/) has collaborated with Springer to publish the papers of the scientific track of our annual conference. To increase the excellence of this applied research conference, which is basically oriented toward new uses of XR technologies, we established a set of committees including Scientific Program chairs leading an International Program Committee (IPC) made up of international experts in the field. Eight scientific full papers were selected to be published in the proceedings of EuroXR 2021, presenting original and unpublished papers documenting new XR research contributions, practice and experience, or novel applications. Five long papers and three medium papers were selected from 22 submissions, resulting in an acceptance rate of 36%. Within a double-blind peer reviewing process, three members of the IPC, with the help of some external expert reviewers, evaluated each submission. From the review reports of the IPC, the Scientific Program chairs took the final decisions. The selected scientific papers are organized in this LNCS volume according to four topical parts: Perception and Cognition, Interactive Techniques, Tracking and Rendering, and Use Case and User Study.
Moreover, with the agreement of Springer and for the third year, the last part of this LNCS volume gathers scientific poster/short papers, presenting work in progress or other scientific contributions, such as ideas for unimplemented and/or unusual systems. Within another double-blind peer reviewing process, based on two review reports from IPC members for each submission, the Scientific Program chairs selected four scientific poster/short papers from nine submissions (an acceptance rate of 44%).
Along with the scientific track, presenting advanced research works (scientific full papers) or research works in progress (scientific poster/short papers) in this LNCS volume, several keynote speakers were invited to EuroXR 2021. Additionally, an application track, subdivided into talk, poster, and demo sessions, was organized for participants to report on the current use of XR technologies in multiple fields.
We would like to thank the IPC members and external reviewers for their insightful reviews, which ensured the high quality of the papers selected for the scientific track of EuroXR 2021. Furthermore, we would like to thank the Application chairs, the Demo and Exhibition chairs, and the local organizers of EuroXR 2021. We are also especially grateful to Anna Kramer (Assistant Editor, Computer Science Editorial, Springer) and Volha Shaparava (Springer OCS Support) for their support and advice during the preparation of this LNCS volume.
September 2021
Patrick Bourdot Mariano Alcañiz Raya Pablo Figueroa Victoria Interrante Torsten W. Kuhlen Dirk Reiners
Listening to, and remembering conversations between two talkers: Cognitive research using embodied conversational agents in audiovisual virtual environments
In the AUDICTIVE project about listening to, and remembering the content of conversations between two talkers we aim to investigate the combined effects of potentially performance-relevant but scarcely addressed audiovisual cues on memory and comprehension for running speech. Our overarching methodological approach is to develop an audiovisual Virtual Reality testing environment that includes embodied Virtual Agents (VAs). This testing environment will be used in a series of experiments to research the basic aspects of audiovisual cognitive performance in a close(r)-to-real-life setting. We aim to provide insights into the contribution of acoustical and visual cues on the cognitive performance, user experience, and presence as well as quality and vibrancy of VR applications, especially those with a social interaction focus. We will study the effects of variations in the audiovisual ’realism’ of virtual environments on memory and comprehension of multi-talker conversations and investigate how fidelity characteristics in audiovisual virtual environments contribute to the realism and liveliness of social VR scenarios with embodied VAs. Additionally, we will study the suitability of text memory, comprehension measures, and subjective judgments to assess the quality of experience of a VR environment. First steps of the project with respect to the general idea of AUDICTIVE are presented.
@inproceedings{Fels2021,
  author = {Fels, Janina and Ermert, Cosima A. and Ehret, Jonathan and Mohanathasan, Chinthusa and B\"{o}nsch, Andrea and Kuhlen, Torsten W. and Schlittmeier, Sabine J.},
  title = {Listening to, and Remembering Conversations between Two Talkers: Cognitive Research using Embodied Conversational Agents in Audiovisual Virtual Environments},
  address = {Berlin},
  publisher = {Deutsche Gesellschaft für Akustik e.V. (DEGA)},
  pages = {1328--1331},
  year = {2021},
  booktitle = {Fortschritte der Akustik - DAGA 2021},
  month = {Aug},
  date = {2021-08-15},
  organization = {47. Jahrestagung für Akustik, Wien (Austria), 15 Aug 2021 - 18 Aug 2021},
  url = {https://vr.rwth-aachen.de/publication/02206/}
}
Talk: Speech Source Directivity for Embodied Conversational Agents
Embodied conversational agents (ECAs) are computer-controlled characters who communicate with a human using natural language. Being represented as virtual humans, ECAs are often utilized in domains such as training, therapy, or guided tours while being embedded in an immersive virtual environment. Plausible speech sound is thereby desirable to improve the overall plausibility of these virtual-reality-based simulations. In an audiovisual VR experiment, we investigated the impact of directional radiation of the produced speech on the perceived naturalism. Furthermore, we examined how directivity filters influence the perceived social presence of participants in interactions with an ECA. Therefore, we varied the source directivity between 1) being omnidirectional, 2) featuring the average directionality of a human speaker, and 3) dynamically adapting to the currently produced phonemes. Our results indicate that directionality of speech is noticed and rated as more natural. However, no significant change in perceived naturalness could be found when adding dynamic, phoneme-dependent directivity. Furthermore, no significant differences in social presence were measurable between any of the three conditions.
@misc{Ehret2021b,
author = {Ehret, Jonathan and Aspöck, Lukas and B\"{o}nsch, Andrea and Fels, Janina and Kuhlen, Torsten W.},
title = {Speech Source Directivity for Embodied Conversational Agents},
publisher = {IHTA, Institute for Hearing Technology and Acoustics},
year = {2021},
note = {Hybrid conference},
month = {Aug},
date = {2021-08-15},
organization = {47. Jahrestagung für Akustik, Wien (Austria), 15 Aug 2021 - 18 Aug 2021},
subtyp = {Video},
url = {https://vr.rwth-aachen.de/publication/02205/}
}
An Immersive Node-Link Visualization of Artificial Neural Networks for Machine Learning Experts
The black box problem of artificial neural networks (ANNs) is still a very relevant issue. When communicating basic concepts of ANNs, they are often depicted as node-link diagrams. Despite this being a straightforward way to visualize them, it is rarely used outside an educational context. However, we hypothesize that large-scale node-link diagrams of full ANNs could be useful even to machine learning experts. Hence, we present a visualization tool that depicts convolutional ANNs as node-link diagrams using immersive virtual reality. We applied our tool to a use case in the field of machine learning research and adapted it to the specific challenges. Finally, we performed an expert review to evaluate the usefulness of our visualization. We found that our node-link visualization of ANNs was perceived as helpful in this professional context.
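As a toy illustration of the data side of such a tool (the published system renders in VR; this sketch only derives a node-link structure and is not the authors' code), one can turn layer weight matrices into nodes and pruned links:

import numpy as np

# Sketch: turn dense layer weight matrices into a node-link list, keeping
# only the strongest links, a common trick to keep large ANN diagrams readable.
rng = np.random.default_rng(0)
layer_sizes = [8, 16, 4]                        # toy network, one matrix per gap
weights = [rng.normal(size=(a, b)) for a, b in zip(layer_sizes, layer_sizes[1:])]

nodes, links = [], []
for layer, size in enumerate(layer_sizes):
    nodes += [(layer, i) for i in range(size)]  # node id = (layer, unit index)

threshold = 1.5                                 # prune links with |w| below 1.5
for layer, w in enumerate(weights):
    for i, j in zip(*np.nonzero(np.abs(w) > threshold)):
        links.append(((layer, int(i)), (layer + 1, int(j)), float(w[i, j])))

print(f"{len(nodes)} nodes, {len(links)} links after pruning")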
@inproceedings{Bellgardt2020a,
  author = {Bellgardt, Martin and Scheiderer, Christian and Kuhlen, Torsten W.},
  booktitle = {Proc. of IEEE AIVR},
  title = {{An Immersive Node-Link Visualization of Artificial Neural Networks for Machine Learning Experts}},
  year = {2020}
}
Inferring a User’s Intent on Joining or Passing by Social Groups
Modeling the interactions between users and social groups of virtual agents (VAs) is vital in many virtual-reality-based applications. However, little research on group encounters has been conducted yet. We intend to close this gap by focusing on the distinction between joining and passing by a group. To enhance the interactive capacity of VAs in these situations, knowing the user's objective is required to show reasonable reactions. To this end, we propose a classification scheme which infers the user's intent based on social cues such as proxemics, gazing, and orientation, followed by triggering believable, non-verbal actions on the VAs. We tested our approach in a pilot study with overall promising results and discuss possible improvements for further studies.
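A deliberately simplified, hypothetical version of such a classification scheme is sketched below; the features follow the cues named in the abstract (proxemics, gaze, movement), but all thresholds are invented for illustration and are not the paper's values:

import math

def infer_intent(user_pos, user_gaze_dir, group_center, approach_speed):
    """Toy join/pass-by classifier from proxemics and gaze cues.

    All thresholds are illustrative placeholders, not values from the paper.
    """
    dx, dy = group_center[0] - user_pos[0], group_center[1] - user_pos[1]
    distance = math.hypot(dx, dy)
    # Angle between the gaze direction and the direction toward the group.
    gaze_angle = math.degrees(
        math.atan2(dy, dx) - math.atan2(user_gaze_dir[1], user_gaze_dir[0]))
    gaze_angle = abs((gaze_angle + 180) % 360 - 180)

    if distance < 2.0 and gaze_angle < 30 and approach_speed < 0.8:
        return "join"     # close, looking at the group, slowing down
    return "pass_by"      # otherwise assume the user walks past

print(infer_intent((0, 0), (1, 0), (1.5, 0.2), approach_speed=0.5))  # -> "join"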
@inproceedings{10.1145/3383652.3423862,
author = {B\"{o}nsch, Andrea and Bluhm, Alexander R. and Ehret, Jonathan and Kuhlen, Torsten W.},
title = {Inferring a User's Intent on Joining or Passing by Social Groups},
year = {2020},
isbn = {9781450375863},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3383652.3423862},
doi = {10.1145/3383652.3423862},
abstract = {Modeling the interactions between users and social groups of virtual agents (VAs) is vital in many virtual-reality-based applications. However, only little research on group encounters has been conducted yet. We intend to close this gap by focusing on the distinction between joining and passing-by a group. To enhance the interactive capacity of VAs in these situations, knowing the user's objective is required to show reasonable reactions. To this end, we propose a classification scheme which infers the user's intent based on social cues such as proxemics, gazing and orientation, followed by triggering believable, non-verbal actions on the VAs. We tested our approach in a pilot study with overall promising results and discuss possible improvements for further studies.},
booktitle = {Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents},
articleno = {10},
numpages = {8},
keywords = {virtual agents, joining a group, social groups, virtual reality},
location = {Virtual Event, Scotland, UK},
series = {IVA '20}
}
Evaluating the Influence of Phoneme-Dependent Dynamic Speaker Directivity of Embodied Conversational Agents’ Speech
Generating natural embodied conversational agents within virtual spaces crucially depends on speech sounds and their directionality. In this work, we simulated directional filters to not only add directionality, but also directionally adapt each phoneme. We therefore mimic reality where changing mouth shapes have an influence on the directional propagation of sound. We conducted a study (n = 32) evaluating naturalism ratings, preference and distinguishability of omnidirectional speech auralization compared to static and dynamic, phoneme-dependent directivities. The results indicated that participants cannot distinguish dynamic from static directivity. Furthermore, participants’ preference ratings aligned with their naturalism ratings. There was no unanimity, however, with regards to which auralization is the most natural.
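To make the idea tangible, the following sketch applies an orientation-dependent gain whose shape depends on a phoneme class. This is a strong simplification under invented parameters; the study used measured directivity filters inside a virtual acoustics pipeline:

import math

# Simplified gain model: each phoneme class gets its own directivity pattern,
# applied as an orientation-dependent gain. Real systems use measured filters.
DIRECTIVITY = {                 # placeholder exponents, not measured data
    "open_vowel": 1.0,          # fairly omnidirectional
    "closed_vowel": 2.0,
    "fricative": 3.0,           # strongly forward-oriented
}

def directional_gain(phoneme_class, angle_deg):
    """Cardioid-like gain ((1 + cos a) / 2)^p for a listener at angle a."""
    p = DIRECTIVITY[phoneme_class]
    return ((1.0 + math.cos(math.radians(angle_deg))) / 2.0) ** p

for phon in DIRECTIVITY:
    print(phon, round(directional_gain(phon, 90.0), 3))  # listener at the side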
@inproceedings{10.1145/3383652.3423863,
author = {Ehret, Jonathan and Stienen, Jonas and Brozdowski, Chris and B\"{o}nsch, Andrea and Mittelberg, Irene and Vorl\"{a}nder, Michael and Kuhlen, Torsten W.},
title = {Evaluating the Influence of Phoneme-Dependent Dynamic Speaker Directivity of Embodied Conversational Agents' Speech},
year = {2020},
isbn = {9781450375863},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3383652.3423863},
doi = {10.1145/3383652.3423863},
abstract = {Generating natural embodied conversational agents within virtual spaces crucially depends on speech sounds and their directionality. In this work, we simulated directional filters to not only add directionality, but also directionally adapt each phoneme. We therefore mimic reality where changing mouth shapes have an influence on the directional propagation of sound. We conducted a study (n = 32) evaluating naturalism ratings, preference and distinguishability of omnidirectional speech auralization compared to static and dynamic, phoneme-dependent directivities. The results indicated that participants cannot distinguish dynamic from static directivity. Furthermore, participants' preference ratings aligned with their naturalism ratings. There was no unanimity, however, with regards to which auralization is the most natural.},
booktitle = {Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents},
articleno = {17},
numpages = {8},
keywords = {phoneme-dependent directivity, directional 3D sound, speech, embodied conversational agents, virtual acoustics},
location = {Virtual Event, Scotland, UK},
series = {IVA '20}
}
Rilievo: Artistic Scene Authoring via Interactive Height Map Extrusion in VR
The authors present a virtual authoring environment for artistic creation in VR. It enables the effortless conversion of 2D images into volumetric 3D objects. Artistic elements in the input material are extracted with a convenient VR-based segmentation tool. Relief sculpting is then performed by interactively mixing different height maps. These are automatically generated from the input image structure and appearance. A prototype of the tool is showcased in an analog-virtual artistic workflow in collaboration with a traditional painter. It combines the expressiveness of analog painting and sculpting with the creative freedom of spatial arrangement in VR.
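The core extrusion step can be pictured as mapping image-derived height maps to a displaced vertex grid. A minimal numpy sketch, assuming the height maps are already extracted from the input image (the actual tool mixes them interactively in VR):

import numpy as np

def extrude_height_map(height, scale=1.0):
    """Turn a 2D height map (H x W array in [0, 1]) into a 3D vertex grid."""
    h, w = height.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # One vertex per pixel; z comes from the (possibly mixed) height maps.
    return np.stack([xs, ys, scale * height], axis=-1).reshape(-1, 3)

# Mix two height maps, e.g. one from image luminance, one from edge structure.
luminance = np.random.rand(64, 64)   # stand-ins for maps derived from the image
structure = np.random.rand(64, 64)
mixed = 0.7 * luminance + 0.3 * structure
vertices = extrude_height_map(mixed, scale=10.0)
print(vertices.shape)  # (4096, 3)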
@article{eroglu2020rilievo,
title={Rilievo: Artistic Scene Authoring via Interactive Height Map Extrusion in VR},
author={Eroglu, Sevinc and Schmitz, Patric and Martinez, Carlos Aguilera and Rusch, Jana and Kobbelt, Leif and Kuhlen, Torsten W.},
journal={Leonardo},
volume={53},
number={4},
pages={438--441},
year={2020},
publisher={MIT Press}
}
Feature Tracking by Two-Step Optimization
Tracking the temporal evolution of features in time-varying data is a key method in visualization. For typical feature definitions, such as vortices, objects are sparsely distributed over the data domain. In this paper, we present a novel approach for tracking both sparse and space-filling features. While the former comprise only a small fraction of the domain, the latter form a set of objects whose union covers the domain entirely while the individual objects are mutually disjoint. Our approach determines the assignment of features between two successive time steps by solving two graph optimization problems. It first resolves one-to-one assignments of features by computing a maximum-weight, maximum-cardinality matching on a weighted bipartite graph. Second, our algorithm detects events by creating a graph of potentially conflicting event explanations and finding a weighted, independent set in it. We demonstrate our method's effectiveness on synthetic and simulation data sets, the former of which enable quantitative evaluation because of the availability of ground-truth information. Here, our method performs on par with or better than a well-established reference algorithm. In addition, manual visual inspection by our collaborators confirms the results' plausibility for simulation data.
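The first of the two optimization problems, one-to-one assignment, can be prototyped with an off-the-shelf solver. In the sketch below, linear_sum_assignment serves as a stand-in for the paper's maximum-weight, maximum-cardinality matching, under the assumption that feature similarities are given as a matrix:

import numpy as np
from scipy.optimize import linear_sum_assignment

# Step 1 of the two-step scheme: maximum-weight matching between features of
# two successive time steps. SciPy minimizes cost, so we negate the weights.
similarity = np.array([
    [0.9, 0.1, 0.0],   # similarity[i, j]: feature i at t vs. feature j at t+1
    [0.2, 0.8, 0.3],
    [0.0, 0.4, 0.7],
])
rows, cols = linear_sum_assignment(-similarity)
for i, j in zip(rows, cols):
    print(f"feature {i} (t) -> feature {j} (t+1), weight {similarity[i, j]}")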
@ARTICLE{Schnorr2018,
  author = {Andrea Schnorr and Dirk N. Helmrich and Dominik Denker and Torsten W. Kuhlen and Bernd Hentschel},
  title = {{F}eature {T}racking by {T}wo-{S}tep {O}ptimization},
  journal = {IEEE Transactions on Visualization and Computer Graphics},
  volume = {preprint available online},
  doi = {10.1109/TVCG.2018.2883630},
  year = {2018},
}
The Impact of a Virtual Agent’s Non-Verbal Emotional Expression on a User’s Personal Space Preferences
Virtual-reality-based interactions with virtual agents (VAs) are likely subject to similar influences as human-human interactions. In either real or virtual social interactions, interactants try to maintain their personal space (PS), a ubiquitous, situative, flexible safety zone. Building upon findings of larger PS preferences towards humans and VAs with angry facial expressions, we extend the investigations to whole-body emotional expressions. In two immersive settings (HMD and CAVE), 66 males were approached by an either happy, angry, or neutral male VA. Subjects preferred a larger PS to the angry VA when being able to stop him at their convenience (Sample task), replicating previous findings, and when being able to actively avoid him (Pass-By task). In the latter task, we also observed larger distances in the CAVE than in the HMD.
@inproceedings{10.1145/3383652.3423888,
author = {B\"{o}nsch, Andrea and Radke, Sina and Ehret, Jonathan and Habel, Ute and Kuhlen, Torsten W.},
title = {The Impact of a Virtual Agent's Non-Verbal Emotional Expression on a User's Personal Space Preferences},
year = {2020},
isbn = {9781450375863},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3383652.3423888},
doi = {10.1145/3383652.3423888},
abstract = {Virtual-reality-based interactions with virtual agents (VAs) are likely subject to similar influences as human-human interactions. In either real or virtual social interactions, interactants try to maintain their personal space (PS), an ubiquitous, situative, flexible safety zone. Building upon larger PS preferences to humans and VAs with angry facial expressions, we extend the investigations to whole-body emotional expressions. In two immersive settings-HMD and CAVE-66 males were approached by an either happy, angry, or neutral male VA. Subjects preferred a larger PS to the angry VA when being able to stop him at their convenience (Sample task), replicating previous findings, and when being able to actively avoid him (Pass By task). In the latter task, we also observed larger distances in the CAVE than in the HMD.},
booktitle = {Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents},
articleno = {12},
numpages = {8},
keywords = {personal space, virtual reality, emotions, virtual agents},
location = {Virtual Event, Scotland, UK},
series = {IVA '20}
}
Immersive Sketching to Author Crowd Movements in Real-time
Sketch-based interfaces in 2D screen space allow to efficiently author the flow of virtual crowds in a direct and interactive manner. Here, options to redirect a flow by sketching barriers, or to guide entities based on a sketched network of connected sections, are provided. As virtual crowds are increasingly often embedded into virtual reality (VR) applications, 3D authoring is of interest.
In this preliminary work, we thus present a sketch-based approach for VR. First promising results of a proof-of-concept are summarized, and improvement suggestions, extensions, and future steps are discussed.
@inproceedings{10.1145/3383652.3423883,
author = {B\"{o}nsch, Andrea and Barton, Sebastian J. and Ehret, Jonathan and Kuhlen, Torsten W.},
title = {Immersive Sketching to Author Crowd Movements in Real-Time},
year = {2020},
isbn = {9781450375863},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3383652.3423883},
doi = {10.1145/3383652.3423883},
abstract = {Sketch-based interfaces in 2D screen space allow to efficiently author the flow of virtual crowds in a direct and interactive manner. Here, options to redirect a flow by sketching barriers, or guiding entities based on a sketched network of connected sections are provided. As virtual crowds are increasingly often embedded into virtual reality (VR) applications, 3D authoring is of interest. In this preliminary work, we thus present a sketch-based approach for VR. First promising results of a proof-of-concept are summarized and improvement suggestions, extensions, and future steps are discussed.},
booktitle = {Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents},
articleno = {11},
numpages = {3},
keywords = {virtual crowds, virtual reality, sketch-based interface, authoring},
location = {Virtual Event, Scotland, UK},
series = {IVA '20}
}
Calibratio - A Small, Low-Cost, Fully Automated Motion-to-Photon Measurement Device
Since the beginning of the design and implementation of virtual environments, these systems have been built to give users the best possible experience. One detrimental factor for the user experience was shown to be a high end-to-end latency, here measured as motion-to-photon latency, of the system. Thus, a lot of research in the past focused on the measurement and minimization of this latency in virtual environments. Most existing measurement techniques require either expensive measurement hardware like an oscilloscope, mechanical components like a pendulum, or depend on manual evaluation of samples. This paper proposes the concept of an easy-to-build, low-cost device consisting of a microcontroller, a servo motor, and a photodiode to measure the motion-to-photon latency in virtual reality environments fully automatically. It is placed in or attached to the system, calibrates itself, and is controlled/monitored via a web interface. While the general concept is applicable to a variety of VR technologies, this paper focuses on the context of CAVE-like systems.
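Host-side, such a device boils down to: trigger a motion, wait for the photodiode to register the rendered reaction, and take the time difference. The sketch below shows that loop over a serial link; the port, baud rate, and the "T"/"PHOTODIODE" protocol strings are hypothetical placeholders, not the actual Calibratio interface (which, as described, runs autonomously behind a web interface):

import time
import serial  # pyserial, assumed installed; the device appears as a serial port

# Hypothetical protocol: b"T\n" asks the microcontroller to move the servo
# (and thus the tracked target); the firmware replies with b"PHOTODIODE\n"
# once the photodiode sees the display react.
PORT, BAUD = "/dev/ttyUSB0", 115200

with serial.Serial(PORT, BAUD, timeout=2.0) as device:
    samples = []
    for _ in range(50):
        t0 = time.perf_counter()
        device.write(b"T\n")                            # start a tracked motion
        if device.readline().strip() == b"PHOTODIODE":  # display reacted
            samples.append((time.perf_counter() - t0) * 1000.0)
        time.sleep(0.1)                                 # let the servo settle
    if samples:
        print(f"motion-to-photon latency: {sum(samples) / len(samples):.1f} ms mean")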
@InProceedings{Pape2020a,
author = {Sebastian Pape and Marcel Kr\"{u}ger and Jan M\"{u}ller and Torsten W. Kuhlen},
title = {{Calibratio - A Small, Low-Cost, Fully Automated Motion-to-Photon Measurement Device}},
booktitle = {10th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)},
year = {2020},
month={March}
}
Joint Dual-Tasking in VR: Outlining the Behavioral Design of Interactive Human Companions Who Walk and Talk with a User
To resemble realistic and lively places, virtual environments are increasingly often enriched by virtual populations consisting of computer-controlled, human-like virtual agents. While the applications often provide limited user-agent interaction based on, e.g., collision avoidance or mutual gaze, complex user-agent dynamics such as joint locomotion combined with a secondary task, e.g., conversing, have rarely been considered yet. These dual-tasking situations, however, are beneficial for various use cases: guided tours and social simulations will become more realistic and engaging if a user is able to traverse a scene as a member of a social group, while platforms to study crowd and walking behavior will become more powerful and informative. To this end, this presentation deals with different areas of interaction dynamics which need to be combined for modeling dual-tasking with virtual agents. Areas covered are kinematic parameters for the navigation behavior, group shapes in static and mobile situations, as well as verbal and non-verbal behavior for conversations.
@InProceedings{Boensch2020a,
author = {Andrea B\"{o}nsch and Torsten W. Kuhlen},
title = {{Joint Dual-Tasking in VR: Outlining the Behavioral Design of Interactive Human Companions Who Walk and Talk with a User}},
booktitle = {IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE)},
year = {2020},
month={March}
}
When Spatial Devices are not an Option: Object Manipulation in Virtual Reality using 2D Input Devices
With the advent of low-cost virtual reality hardware, new applications arise in professional contexts. These applications have requirements that can differ from the usual premise when developing immersive systems. In this work, we explore the idea that spatial controllers might not be usable for practical reasons, even though they are the best interaction device for the task. Such a reason might be fatigue, as applications might be used over a long period of time. Additionally, some people might have even more difficulty lifting their hands, due to a disability. Hence, we attempt to measure how much the performance in a spatial interaction task decreases when using classical 2D interaction devices instead of a spatial controller. For this, we developed an interaction technique that uses 2D inputs and borrows principles from desktop interaction. We show that our interaction technique is slower to use than the state-of-the-art spatial interaction but is not much worse regarding precision and user preference.
@inproceedings{Bellgardt2020,
  author = {Bellgardt, Martin and Krause, Niklas and Kuhlen, Torsten W.},
  booktitle = {Proc. of GI VR / AR Workshop},
  title = {{When Spatial Devices are not an Option: Object Manipulation in Virtual Reality using 2D Input Devices}},
  doi = {10.18420/vrar2020_9},
  year = {2020}
}
Towards a Graphical User Interface for Exploring and Fine-Tuning Crowd Simulations
Simulating a realistic navigation of virtual pedestrians through virtual environments is a recurring subject of investigation. The various mathematical approaches used to compute the pedestrians' paths result, among other things, in different computation times and varying path characteristics. Customizable parameters, e.g., maximal walking speed or minimal interpersonal distance, add another level of complexity. Thus, choosing the best-fitting approach for a given environment and use case is non-trivial, especially for novice users.
To facilitate the informed choice of a specific algorithm with a certain parameter set, crowd simulation frameworks such as Menge provide an extendable collection of approaches with a unified interface for usage. However, they often lack an elaborate visualization with high informative value, accompanied by visual analysis methods to explore the complete simulation data in more detail, which is required for an informed choice. Benchmarking suites such as SteerBench are a helpful approach as they objectively analyze crowd simulations; however, they are tailored too closely to specific behavior details. To this end, we propose a preliminary design of an advanced graphical user interface providing a 2D and 3D visualization of the crowd simulation data as well as features for time navigation and overall data exploration.
@InProceedings{Boensch2020b,
author = {Andrea B\"{o}nsch and Marcel Jonda and Jonathan Ehret and Torsten W. Kuhlen},
title = {{Towards a Graphical User Interface for Exploring and Fine-Tuning Crowd Simulations}},
booktitle = {IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE)},
year = {2020},
month={March}
}
Talk: Insite: A Generalized Pipeline for In-transit Visualization and Analysis
Neuronal network simulators are essential to computational neuroscience, enabling the study of the nervous system through in-silico experiments. Through the utilization of high-performance computing resources, these simulators are able to simulate increasingly complex and large networks of neurons today. This, however, also creates new challenges for the analysis and visualization of such simulations. In-situ and in-transit strategies are popular approaches in these scenarios. They enable live monitoring of running simulations and parameter adjustment in the case of erroneous configurations, which can save valuable compute resources.
This talk will present the current status of our pipeline for in-transit analysis and visualization of neuronal network simulator data. The pipeline is able to couple with NEST and other simulators and handles data management (querying, filtering, and merging) from multiple simulator instances. Finally, the data is passed to end-user applications for visualization and analysis. The goal is to be integrated into third-party tools such as the multi-view visual analysis toolkit ViSimpl.
Voxel-Based Edge Bundling Through Direction-Aware Kernel Smoothing
Relational data with a spatial embedding, depicted as a node-link diagram, is very common, e.g., in neuroscience, and edge bundling is one way to increase its readability or reveal hidden structures. This article presents a 3D extension to kernel density estimation-based edge bundling that is meant to be used in an interactive immersive analysis setting. This extension adds awareness of the edges' direction when using kernel smoothing and thus implicitly supports both directed and undirected graphs. The method generates explicit bundles of edges, which can be analyzed and visualized individually, as appropriate for a given application context, while it scales linearly with the input size.
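Reduced to 2D and a single direction bin, the kernel-smoothing loop looks roughly as follows (a sketch, not the paper's 3D implementation): splat edge sample points into a grid, smooth with a Gaussian kernel, then move points up the density gradient. Direction awareness would come from binning edges by direction and smoothing each bin separately, so that opposing edges do not merge.

import numpy as np
from scipy.ndimage import gaussian_filter

GRID = 128
points = np.random.rand(500, 2)          # resampled edge points in [0, 1]^2

for _ in range(10):                      # a few smoothing/advection rounds
    density = np.zeros((GRID, GRID))
    idx = np.clip((points * GRID).astype(int), 0, GRID - 1)
    np.add.at(density, (idx[:, 0], idx[:, 1]), 1.0)   # splat points into grid
    density = gaussian_filter(density, sigma=4.0)     # kernel smoothing
    g0, g1 = np.gradient(density)                     # density gradient field
    step = np.stack([g0[idx[:, 0], idx[:, 1]],
                     g1[idx[:, 0], idx[:, 1]]], axis=1)
    points = np.clip(points + 0.002 * step, 0.0, 1.0) # advect toward bundles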
@article{ZIELASKO2019,
  title = "Voxel-based edge bundling through direction-aware kernel smoothing",
  journal = "Computers \& Graphics",
  volume = "83",
  pages = "87--96",
  year = "2019",
  issn = "0097-8493",
  doi = "10.1016/j.cag.2019.06.008",
  url = "http://www.sciencedirect.com/science/article/pii/S0097849319301025",
  author = "Daniel Zielasko and Xiaoqing Zhao and Ali Can Demiralp and Torsten W. Kuhlen and Benjamin Weyers"
}
Feature Tracking Utilizing a Maximum-Weight Independent Set Problem
Tracking the temporal evolution of features in time-varying data remains a combinatorially challenging problem. A recent method models event detection as a maximum-weight independent set problem on a graph representation of all possible explanations [35]. However, optimally solving this problem is NP-hard in the general case. Following the approach by Schnorr et al., we propose a new algorithm for event detection. Our algorithm exploits the model-specific structure of the independent set problem. Specifically, we show how to traverse potential explanations in such a way that a greedy assignment provides reliably good results. We demonstrate the effectiveness of our approach on synthetic and simulation data sets, the former of which include ground-truth tracking information that enables a quantitative evaluation. Our results are within 1% of the theoretical optimum and comparable to an approximate solution provided by a state-of-the-art optimization package. At the same time, our algorithm is significantly faster.
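The greedy core can be sketched in a few lines (the paper's contribution lies in the model-specific traversal order that makes such greedy choices reliable; the plain weight-descending order below is a simplification):

# Greedy sketch of the idea: pick event explanations by descending weight,
# skipping any that conflict with an already chosen explanation.
def greedy_independent_set(weights, conflicts):
    """weights: dict node -> weight; conflicts: dict node -> set of nodes."""
    chosen, blocked = [], set()
    for node in sorted(weights, key=weights.get, reverse=True):
        if node not in blocked:
            chosen.append(node)
            blocked |= conflicts.get(node, set()) | {node}
    return chosen

# Toy conflict graph: explanations a and b are mutually exclusive, c is free.
weights = {"a": 5.0, "b": 4.0, "c": 2.0}
conflicts = {"a": {"b"}, "b": {"a"}, "c": set()}
print(greedy_independent_set(weights, conflicts))  # -> ['a', 'c']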
@InProceedings{Schnorr2019,
  author = {Andrea Schnorr and Dirk Norbert Helmrich and Hank Childs and Torsten Wolfgang Kuhlen and Bernd Hentschel},
  title = {{Feature Tracking Utilizing a Maximum-Weight Independent Set Problem}},
  booktitle = {9th IEEE Symposium on Large Data Analysis and Visualization},
  year = {2019}
}
Influence of Directivity on the Perception of Embodied Conversational Agents' Speech
Embodied conversational agents become more and more important in various virtual reality applications, e.g., as peers, trainers, or therapists. Besides their appearance and behavior, appropriate speech is required for them to be perceived as human-like and realistic. In addition to the voice signal used, its auralization in the immersive virtual environment also has to be believable. Therefore, we investigated the effect of adding directivity to the speech sound source. Directivity simulates the orientation-dependent auralization with regard to the agent's head orientation. We performed a one-factorial user study with two levels (n=35) to investigate the effect directivity has on the perceived social presence and realism of the agent's voice. Our results do not indicate any significant effects of directivity on either variable. We attribute this partly to an overall too low realism of the virtual agent, a scenario that was not utilized in an overly social way, and the generally high variance of the examined measures. These results are critically discussed, and potential further research questions and study designs are identified.
@inproceedings{Wendt2019,
author = {Wendt, Jonathan and Weyers, Benjamin and Stienen, Jonas and B\"{o}nsch, Andrea and Vorl\"{a}nder, Michael and Kuhlen, Torsten W.},
title = {Influence of Directivity on the Perception of Embodied Conversational Agents' Speech},
booktitle = {Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents},
series = {IVA '19},
year = {2019},
isbn = {978-1-4503-6672-4},
location = {Paris, France},
pages = {130--132},
numpages = {3},
url = {http://doi.acm.org/10.1145/3308532.3329434},
doi = {10.1145/3308532.3329434},
acmid = {3329434},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {directional 3d sound, social presence, virtual acoustics, virtual agents},
}
Passive Haptic Menus for Desk-Based and HMD-Projected Virtual Reality
In this work, we evaluate the impact of passive haptic feedback on touch-based menus, given the constraints and possibilities of a seated, desk-based scenario in VR. To this end, we compare a menu that is placed once on the surface of a desk and once mid-air on a surface in front of the user. The study design is completed by two conditions without passive haptic feedback. In the conducted user study (n = 33), we found effects of passive haptics (present vs. non-present) and menu alignment (desk vs. mid-air) on task performance and subjective look & feel; however, the differences between the conditions were small. The overall winner was the mid-air menu with passive haptic feedback, which, however, raises the hardware requirements.
@inproceedings{zielasko2019menu,
title={{Passive Haptic Menus for Desk-Based and HMD-Projected Virtual Reality}},
author={Zielasko, Daniel and Kr{\"u}ger, Marcel and Weyers, Benjamin and Kuhlen, Torsten W},
booktitle={Proc. of IEEE VR Workshop on Everyday Virtual Reality},
year={2019}
}
A Non-Stationary Office Desk Substitution for Desk-Based and HMD-Projected Virtual Reality
The ongoing migration of HMDs to the consumer market allows the integration of immersive environments into analysis workflows that are often bound to an (office) desk. However, a critical factor when considering VR solutions for professional applications is the prevention of cybersickness. In the given scenario, the user is usually seated, and the surrounding real-world environment is very present, its most dominant part arguably being the desk itself. Including this desk in the virtual environment could serve as a resting frame and thus reduce cybersickness, alongside many further possibilities. In this work, we evaluate the feasibility of such a substitution in the context of a visual data analysis task involving travel, and we measure the impact on cybersickness as well as general task performance and presence. In the conducted user study (n=52), surprisingly, and partially in contradiction to existing work, we found no significant differences in these core measures between the control condition without a virtual table and the condition containing a virtual table. However, the results nevertheless support the inclusion of a virtual table in desk-based use cases.
@inproceedings{zielasko2019travel,
title={{A Non-Stationary Office Desk Substitution for Desk-Based and HMD-Projected Virtual Reality}},
author={Zielasko, Daniel and Weyers, Benjamin and Kuhlen, Torsten W},
booktitle ={Proc. of IEEE VR Workshop on Immersive Sickness Prevention},
year={2019}
}
Evaluation of Omnipresent Virtual Agents Embedded as Temporarily Required Assistants in Immersive Environments
When designing the behavior of embodied, computer-controlled, human-like virtual agents (VAs) serving as temporarily required assistants in virtual reality applications, two linked factors have to be considered: the time the VA is visible in the scene, defined as presence time (PT), and the time until the VA is actually available for support after a user's call, defined as approaching time (AT).
Complementing previous research on behaviors with a low PT, we present the results of a controlled within-subjects study investigating behaviors by which the VA is always visible, i.e., behaviors with a high PT. The two tested behaviors affecting the AT are: following, a design in which the VA is omnipresent and constantly follows the users, and busy, a design in which the VA self-reliantly spends time nearby the users and approaches them only if explicitly asked for. The results indicate that subjects prefer the following VA, a behavior which also leads to slightly lower execution times compared to busy.
@InProceedings{Boensch2019c,
author = {Andrea B\"{o}nsch and Jan Hoffmann and Jonathan Wendt and Torsten W. Kuhlen},
title = {{Evaluation of Omnipresent Virtual Agents Embedded as Temporarily Required Assistants in Immersive Environments}},
booktitle = {IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE)},
year = {2019},
doi={10.1109/VHCIE.2019.8714726},
month={March}
}
An Empirical Lab Study Investigating If Higher Levels of Immersion Increase the Willingness to Donate
Technological innovations have a growing relevance for charitable donations, as new technologies shape the way we perceive and approach digital media. In a between-subjects study with sixty-one volunteers, we investigated whether a higher degree of immersion for the potential donor can yield more donations for non-governmental organizations. To this end, we compared the donations given after experiencing a video-based, an augmented-reality-based, or a virtual-reality-based scenario with a virtual agent representing a war-victimized Syrian boy talking about his losses. Our initial results indicate that immersion has no impact. However, the donor's perceived innovativeness of the used technology might be an influencing factor.
@InProceedings{Boensch2019b,
author = {Andrea B\"{o}nsch and Alexander Kies and Moritz Jörling and Stefanie Paluch and Torsten W. Kuhlen},
title = {{An Empirical Lab Study Investigating If Higher Levels of Immersion Increase the Willingness to Donate}},
booktitle = {IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE)},
year = {2019},
pages={1-4},
doi={10.1109/VHCIE.2019.8714622},
month={March}
}
Toward Rigorous Parameterization of Underconstrained Neural Network Models Through Interactive Visualization and Steering of Connectivity Generation
Simulation models in many scientific fields can have non-unique solutions or unique solutions which can be difficult to find. Moreover, in evolving systems, unique final state solutions can be reached by multiple different trajectories. Neuroscience is no exception. Often, neural network models are subject to parameter fitting to obtain desirable output comparable to experimental data. Parameter fitting without sufficient constraints and a systematic exploration of the possible solution space can lead to conclusions valid only around local minima or around non-minima. To address this issue, we have developed an interactive tool for visualizing and steering parameters in neural network simulation models. In this work, we focus particularly on connectivity generation, since finding suitable connectivity configurations for neural network models constitutes a complex parameter search scenario. The development of the tool has been guided by several use cases: the tool allows researchers to steer the parameters of the connectivity generation during the simulation, thus quickly growing networks composed of multiple populations with a targeted mean activity. The flexibility of the software allows scientists to explore other connectivity and neuron variables apart from the ones presented as use cases. With this tool, we enable an interactive exploration of parameter spaces and a better understanding of neural network models, and we grapple with the crucial problem of non-unique network solutions and trajectories. In addition, we observe a reduction in turnaround times for the assessment of these models due to interactive visualization while the simulation is computed.
@ARTICLE{10.3389/fninf.2018.00032,
AUTHOR={Nowke, Christian and Diaz-Pier, Sandra and Weyers, Benjamin and Hentschel, Bernd and Morrison, Abigail and Kuhlen, Torsten W. and Peyser, Alexander},
TITLE={Toward Rigorous Parameterization of Underconstrained Neural Network Models Through Interactive Visualization and Steering of Connectivity Generation},
JOURNAL={Frontiers in Neuroinformatics},
VOLUME={12},
PAGES={32},
YEAR={2018},
URL={https://www.frontiersin.org/article/10.3389/fninf.2018.00032},
DOI={10.3389/fninf.2018.00032},
ISSN={1662-5196},
ABSTRACT={Simulation models in many scientific fields can have non-unique solutions or unique solutions which can be difficult to find.
Moreover, in evolving systems, unique final state solutions can be reached by multiple different trajectories.
Neuroscience is no exception. Often, neural network models are subject to parameter fitting to obtain desirable output comparable to experimental data. Parameter fitting without sufficient constraints and a systematic exploration of the possible solution space can lead to conclusions valid only around local minima or around non-minima. To address this issue, we have developed an interactive tool for visualizing and steering parameters in neural network simulation models.
In this work, we focus particularly on connectivity generation, since finding suitable connectivity configurations for neural network models constitutes a complex parameter search scenario. The development of the tool has been guided by several use cases -- the tool allows researchers to steer the parameters of the connectivity generation during the simulation, thus quickly growing networks composed of multiple populations with a targeted mean activity. The flexibility of the software allows scientists to explore other connectivity and neuron variables apart from the ones presented as use cases. With this tool, we enable an interactive exploration of parameter spaces and a better understanding of neural network models and grapple with the crucial problem of non-unique network solutions and trajectories. In addition, we observe a reduction in turn around times for the assessment of these models, due to interactive visualization while the simulation is computed.}
}
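The steering loop itself can be pictured as a simple closed loop between simulation and controller. A minimal sketch, assuming a hypothetical simulate_step(p) that runs one simulation interval with connection probability p and returns the measured mean firing rate:

def steer_connectivity(simulate_step, target_rate, p0=0.1, gain=1e-3, n_steps=200):
    # proportional update: nudge the connectivity parameter toward the
    # target mean activity while the simulation keeps running
    p = p0
    for _ in range(n_steps):
        rate = simulate_step(p)  # one simulated interval (hypothetical API)
        p = min(1.0, max(0.0, p + gain * (target_rate - rate)))
    return p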
VIOLA: A Multi-Purpose and Web-Based Visualization Tool for Neuronal-Network Simulation Output
Neuronal network models and corresponding computer simulations are invaluable tools to aid the interpretation of the relationship between neuron properties, connectivity, and measured activity in cortical tissue. Spatiotemporal patterns of activity propagating across the cortical surface as observed experimentally can for example be described by neuronal network models with layered geometry and distance-dependent connectivity. In order to cover the surface area captured by today's experimental techniques and to achieve sufficient self-consistency, such models contain millions of nerve cells. The interpretation of the resulting stream of multi-modal and multi-dimensional simulation data calls for integrating interactive visualization steps into existing simulation-analysis workflows. Here, we present a set of interactive visualization concepts called views for the visual analysis of activity data in topological network models, and a corresponding reference implementation VIOLA (VIsualization Of Layer Activity). The software is a lightweight, open-source, web-based, and platform-independent application combining and adapting modern interactive visualization paradigms, such as coordinated multiple views, for massively parallel neurophysiological data. For a use-case demonstration we consider spiking activity data of a two-population, layered point-neuron network model incorporating distance-dependent connectivity subject to a spatially confined excitation originating from an external population. With the multiple coordinated views, an explorative and qualitative assessment of the spatiotemporal features of neuronal activity can be performed upfront of a detailed quantitative data analysis of specific aspects of the data. Interactive multi-view analysis therefore assists existing data analysis workflows. Furthermore, ongoing efforts including the European Human Brain Project aim at providing online user portals for integrated model development, simulation, analysis, and provenance tracking, wherein interactive visual analysis tools are one component. Browser-compatible, web-technology-based solutions are therefore required. Within this scope, with VIOLA we provide a first prototype.
@ARTICLE{10.3389/fninf.2018.00075,
AUTHOR={Senk, Johanna and Carde, Corto and Hagen, Espen and Kuhlen, Torsten W. and Diesmann, Markus and Weyers, Benjamin},
TITLE={VIOLA—A Multi-Purpose and Web-Based Visualization Tool for Neuronal-Network Simulation Output},
JOURNAL={Frontiers in Neuroinformatics},
VOLUME={12},
PAGES={75},
YEAR={2018},
URL={https://www.frontiersin.org/article/10.3389/fninf.2018.00075},
DOI={10.3389/fninf.2018.00075},
ISSN={1662-5196},
ABSTRACT={Neuronal network models and corresponding computer simulations are invaluable tools to aid the interpretation of the relationship between neuron properties, connectivity and measured activity in cortical tissue. Spatiotemporal patterns of activity propagating across the cortical surface as observed experimentally can for example be described by neuronal network models with layered geometry and distance-dependent connectivity. In order to cover the surface area captured by today's experimental techniques and to achieve sufficient self-consistency, such models contain millions of nerve cells. The interpretation of the resulting stream of multi-modal and multi-dimensional simulation data calls for integrating interactive visualization steps into existing simulation-analysis workflows. Here, we present a set of interactive visualization concepts called views for the visual analysis of activity data in topological network models, and a corresponding reference implementation VIOLA (VIsualization Of Layer Activity). The software is a lightweight, open-source, web-based and platform-independent application combining and adapting modern interactive visualization paradigms, such as coordinated multiple views, for massively parallel neurophysiological data. For a use-case demonstration we consider spiking activity data of a two-population, layered point-neuron network model incorporating distance-dependent connectivity subject to a spatially confined excitation originating from an external population. With the multiple coordinated views, an explorative and qualitative assessment of the spatiotemporal features of neuronal activity can be performed upfront of a detailed quantitative data analysis of specific aspects of the data. Interactive multi-view analysis therefore assists existing data analysis workflows. Furthermore, ongoing efforts including the European Human Brain Project aim at providing online user portals for integrated model development, simulation, analysis and provenance tracking, wherein interactive visual analysis tools are one component. Browser-compatible, web-technology based solutions are therefore required. Within this scope, with VIOLA we provide a first prototype.}
}
Immersive Analytics Applications in Life and Health Sciences
Life and health sciences are key application areas for immersive analytics. This spans a broad range including medicine (e.g., investigations in tumour boards), pharmacology (e.g., research of adverse drug reactions), biology (e.g., immersive virtual cells) and ecology (e.g., analytics of animal behaviour). We present a brief overview of general applications of immersive analytics in the life and health sciences, and present a number of applications in detail, such as immersive analytics in structural biology, in medical image analytics, in neurosciences, in epidemiology, in biological network analysis and for virtual cells.
@Inbook{Czauderna2018,
author="Czauderna, Tobias
and Haga, Jason
and Kim, Jinman
and Klapperst{\"u}ck, Matthias
and Klein, Karsten
and Kuhlen, Torsten
and Oeltze-Jafra, Steffen
and Sommer, Bj{\"o}rn
and Schreiber, Falk",
editor="Marriott, Kim
and Schreiber, Falk
and Dwyer, Tim
and Klein, Karsten
and Riche, Nathalie Henry
and Itoh, Takayuki
and Stuerzlinger, Wolfgang
and Thomas, Bruce H.",
title="Immersive Analytics Applications in Life and Health Sciences",
bookTitle="Immersive Analytics",
year="2018",
publisher="Springer International Publishing",
address="Cham",
pages="289--330",
abstract="Life and health sciences are key application areas for immersive analytics. This spans a broad range including medicine (e.g., investigations in tumour boards), pharmacology (e.g., research of adverse drug reactions), biology (e.g., immersive virtual cells) and ecology (e.g., analytics of animal behaviour). We present a brief overview of general applications of immersive analytics in the life and health sciences, and present a number of applications in detail, such as immersive analytics in structural biology, in medical image analytics, in neurosciences, in epidemiology, in biological network analysis and for virtual cells.",
isbn="978-3-030-01388-2",
doi="10.1007/978-3-030-01388-2_10",
url="https://doi.org/10.1007/978-3-030-01388-2_10"
}
Exploring Immersive Analytics for Built Environments
This chapter overviews the application of immersive analytics to simulations of built environments through three distinct case studies. The first case study examines an immersive analytics approach based upon the concept of “Virtual Production Intelligence” for virtual prototyping tools throughout the planning phase of complete production sites. The second study addresses the 3D simulation of an extensive urban area and the attendant immersive analytic considerations in an interactive model of a sustainable city. The third study reviews how immersive analytic overlays have been applied for virtual heritage in the reconstruction and crowd simulation of the medieval Cambodian temple complex of Angkor Wat.
@Inbook{Chandler2018,
author="Chandler, Tom
and Morgan, Thomas
and Kuhlen, Torsten Wolfgang",
editor="Marriott, Kim
and Schreiber, Falk
and Dwyer, Tim
and Klein, Karsten
and Riche, Nathalie Henry
and Itoh, Takayuki
and Stuerzlinger, Wolfgang
and Thomas, Bruce H.",
title="Exploring Immersive Analytics for Built Environments",
bookTitle="Immersive Analytics",
year="2018",
publisher="Springer International Publishing",
address="Cham",
pages="331--357",
abstract="This chapter overviews the application of immersive analytics to simulations of built environments through three distinct case studies. The first case study examines an immersive analytics approach based upon the concept of ``Virtual Production Intelligence'' for virtual prototyping tools throughout the planning phase of complete production sites. The second study addresses the 3D simulation of an extensive urban area (191 km{\$}{\$}^2{\$}{\$}) and the attendant immersive analytic considerations in an interactive model of a sustainable city. The third study reviews how immersive analytic overlays have been applied for virtual heritage in the reconstruction and crowd simulation of the medieval Cambodian temple complex of Angkor Wat.",
isbn="978-3-030-01388-2",
doi="10.1007/978-3-030-01388-2_11",
url="https://doi.org/10.1007/978-3-030-01388-2_11"
}
Interactive Visual Analysis of Multi-dimensional Metamodels
In the simulation of manufacturing processes, complex models are used to examine process properties. To save computation time, so-called metamodels serve as surrogates for the original models. Metamodels are inherently difficult to interpret, because they resemble multi-dimensional functions f: R^n -> R^m that map configuration parameters to production criteria. We propose a multi-view visualization application called memoSlice that composes several visualization techniques specially adapted to the analysis of metamodels. With our application, we enable users to improve their understanding of a metamodel, but also to easily optimize processes. We put special attention on providing a high level of interactivity by realizing specialized parallelization techniques to provide timely feedback on user interactions. In this paper, we outline these parallelization techniques and demonstrate their effectiveness by means of micro- and high-level measurements.
@inproceedings {pgv.20181098,
booktitle = {Eurographics Symposium on Parallel Graphics and Visualization},
editor = {Hank Childs and Fernando Cucchietti},
title = {{Interactive Visual Analysis of Multi-dimensional Metamodels}},
author = {Gebhardt, Sascha and Pick, Sebastian and Hentschel, Bernd and Kuhlen, Torsten Wolfgang},
year = {2018},
publisher = {The Eurographics Association},
ISSN = {1727-348X},
ISBN = {978-3-03868-054-3},
DOI = {10.2312/pgv.20181098}
}
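As a small illustration of what interpreting f: R^n -> R^m amounts to in practice, the sketch below samples one 2D slice of a metamodel through the current configuration, which is the essence of a slice-based metamodel view; the callable f, the slice extent, and the resolution are assumptions for the example, not the application's API.

import numpy as np

def slice_metamodel(f, x0, dims=(0, 1), extent=1.0, res=64, out=0):
    # f: callable mapping an n-dim configuration to m production criteria
    # x0: current configuration; dims: the two parameters to vary
    i, j = dims
    u = np.linspace(-extent, extent, res)
    img = np.empty((res, res))
    for a, du in enumerate(u):
        for b, dv in enumerate(u):
            x = np.array(x0, dtype=float)
            x[i] += du
            x[j] += dv
            img[a, b] = f(x)[out]  # one criterion as a heat-map value
    return img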
Social VR: How Personal Space is Affected by Virtual Agents’ Emotions
Personal space (PS), the flexible protective zone maintained around oneself, is a key element of everyday social interactions. It affects, e.g., people's interpersonal distance and is thus largely involved when navigating through social environments. However, the PS is regulated dynamically: its size depends on numerous social and personal characteristics, and its violation evokes different levels of discomfort and physiological arousal. Thus, gaining more insight into this phenomenon is important.
We contribute to the investigation of PS by presenting the results of a controlled experiment in a CAVE, focusing on German males aged 18 to 30 years. The PS preferences of 27 participants were sampled while they were approached by either a single embodied, computer-controlled virtual agent (VA) or by a group of three VAs. In order to investigate the influence of a VA's emotions, we altered their facial expression between angry and happy. Our results indicate that both the emotion and the number of approaching VAs influence the PS: larger distances are kept to angry VAs compared to happy ones, and single VAs are allowed closer than the group. Thus, our study provides a foundation for social and behavioral studies investigating PS preferences.
@InProceedings{Boensch2018c,
author = {Andrea B\"{o}nsch and Sina Radke and Heiko Overath and Laura M. Asch\'{e} and Jonathan Wendt and Tom Vierjahn and Ute Habel and Torsten W. Kuhlen},
title = {{Social VR: How Personal Space is Affected by Virtual Agents’ Emotions}},
booktitle = {Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces (VR) 2018},
year = {2018}
}
Streaming Live Neuronal Simulation Data into Visualization and Analysis
Neuroscientists want to inspect the data their simulations are producing while these are still running. On the one hand, this saves them time waiting for results and, therefore, insight. On the other hand, it allows for a more efficient use of CPU time if the simulations are run on supercomputers. If they had access to the data being generated, neuroscientists could monitor it and take counter-actions, e.g., parameter adjustments, should the simulation deviate too much from in-vivo observations or get stuck.
As a first step toward this goal, we devise an in situ pipeline tailored to the neuroscientific use case. It is capable of recording and transferring simulation data to an analysis/visualization process while the simulation is still running. The developed libraries are made publicly available as open source projects. We provide a proof-of-concept integration, coupling the neuronal simulator NEST to basic 2D and 3D visualization.
@InProceedings{10.1007/978-3-030-02465-9_18,
author="Oehrl, Simon
and M{\"u}ller, Jan
and Schnathmeier, Jan
and Eppler, Jochen Martin
and Peyser, Alexander
and Plesser, Hans Ekkehard
and Weyers, Benjamin
and Hentschel, Bernd
and Kuhlen, Torsten W.
and Vierjahn, Tom",
editor="Yokota, Rio
and Weiland, Mich{\`e}le
and Shalf, John
and Alam, Sadaf",
title="Streaming Live Neuronal Simulation Data into Visualization and Analysis",
booktitle="High Performance Computing",
year="2018",
publisher="Springer International Publishing",
address="Cham",
pages="258--272",
abstract="Neuroscientists want to inspect the data their simulations are producing while these are still running. This will on the one hand save them time waiting for results and therefore insight. On the other, it will allow for more efficient use of CPU time if the simulations are being run on supercomputers. If they had access to the data being generated, neuroscientists could monitor it and take counter-actions, e.g., parameter adjustments, should the simulation deviate too much from in-vivo observations or get stuck.",
isbn="978-3-030-02465-9"
}
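The transport idea, stripped of all specifics (the paper's open-source libraries provide the real implementation), is simply a producer that pushes simulation output over a channel while stepping; host, port, and the batch format below are illustrative assumptions:

import json, socket

def stream_spikes(spike_batches, host="localhost", port=5555):
    # spike_batches: iterable of JSON-serializable batches, e.g. lists of
    # (neuron_id, spike_time) pairs produced while the simulation steps
    with socket.create_connection((host, port)) as s:
        for batch in spike_batches:
            s.sendall((json.dumps(batch) + "\n").encode())  # live hand-off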
Interactive Exploration Assistance for Immersive Virtual Environments Based on Object Visibility and Viewpoint Quality
During free exploration of an unknown virtual scene, users often miss important parts, leading to incorrect or incomplete environment knowledge and a potential negative impact on performance in later tasks. This is addressed by wayfinding aids such as compasses, maps, or trails, and automated exploration schemes such as guided tours. However, these approaches either do not actually ensure exploration success or take away control from the user.
Therefore, we present an interactive assistance interface to support exploration that guides users to interesting and unvisited parts of the scene upon request, supplementing their own, free exploration. It is based on an automated analysis of object visibility and viewpoint quality and is therefore applicable to a wide range of scenes without human supervision or manual input. In a user study, we found that the approach improves users' knowledge of the environment, leads to a more complete exploration of the scene, and is also subjectively helpful and easy to use.
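A minimal sketch of the request-driven suggestion step, under the assumption that per-viewpoint quality scores and visible-object sets have already been precomputed (the paper's actual metrics are more elaborate):

def suggest_viewpoint(quality, visible, seen):
    # quality: dict viewpoint -> score in [0, 1]
    # visible: dict viewpoint -> set of object ids visible from there
    # seen: set of object ids the user has already looked at
    def gain(v):
        # favor viewpoints revealing many unseen objects, tie-broken by quality
        return (len(visible[v] - seen), quality[v])
    return max(quality, key=gain)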
Does the Directivity of a Virtual Agent’s Speech Influence the Perceived Social Presence?
When interacting and communicating with virtual agents in immersive environments, the agents' behavior should be believable and authentic. One important aspect of this is a convincing auralization of their speech. In this work-in-progress paper, we present a study design to evaluate the effect of adding directivity to a speech sound source on the perceived social presence of a virtual agent. We describe the study design and discuss first results of a pre-study as well as the resulting improvements of the design.
@InProceedings{Wendt2018,
author = {Jonathan Wendt and Benjamin Weyers and Andrea B\"{o}nsch and Jonas Stienen and Tom Vierjahn and Michael Vorl\"{a}nder and Torsten W. Kuhlen},
title = {{Does the Directivity of a Virtual Agent’s Speech Influence the Perceived Social Presence?}},
booktitle = {IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE)},
year = {2018}
}
Dynamic Field of View Reduction Related to Subjective Sickness Measures in an HMD-based Data Analysis Task
Various factors influence the degree of cybersickness a user can suffer in an immersive virtual environment, some of which can be controlled without adapting the virtual environment itself. When using HMDs, one example is the size of the field of view. However, the degree to which factors like this can be manipulated without affecting the user negatively in other ways is limited. Another prominent characteristic of cybersickness is that it affects individuals very differently. Therefore, to account for both the possibly disruptive nature of alleviating factors and the high interpersonal variance, a promising approach may be to intervene only in cases where users experience discomfort symptoms, and only as much as necessary. Thus, we conducted a first experiment in which the field of view was decreased when people felt uncomfortable, to evaluate the possible positive impact on sickness and the negative influence on presence. While we found no significant evidence for either of these possible effects, interesting further results and observations were made.
@InProceedings{zielasko2018,
title={{Dynamic Field of View Reduction Related to Subjective Sickness Measures in an HMD-based Data Analysis Task}},
author={Zielasko, Daniel and Mei{\ss}ner, Alexander and Freitag, Sebastian and Weyers, Benjamin and Kuhlen, Torsten W},
booktitle ={Proc. of IEEE Virtual Reality Workshop on Everyday Virtual Reality},
year={2018}
}
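The intervention itself can be as simple as mapping a discomfort report to a field-of-view value; the bounds below are illustrative, not the study's settings:

def adapted_fov(discomfort, fov_max=110.0, fov_min=60.0):
    # discomfort: self-reported score normalized to [0, 1];
    # the FOV shrinks only as much as the current symptoms demand
    d = min(max(discomfort, 0.0), 1.0)
    return fov_max - d * (fov_max - fov_min)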
Towards Understanding the Influence of a Virtual Agent’s Emotional Expression on Personal Space
The concept of personal space is a key element of social interactions. As such, it is a recurring subject of investigations in the context of research on proxemics. Using virtual-reality-based experiments, we contribute to this area by evaluating the direct effects of emotional expressions of an approaching virtual agent on an individual’s behavioral and physiological responses. As a pilot study focusing on the emotion expressed solely by facial expressions gave promising results, we now present a study design to gain more insight.
@InProceedings{Boensch2018b,
author = {Andrea B\"{o}nsch and Sina Radke and Jonathan Wendt and Tom Vierjahn and Ute Habel and Torsten W. Kuhlen},
title = {{Towards Understanding the Influence of a Virtual Agent’s Emotional Expression on Personal Space}},
booktitle = {IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE)},
year = {2018}
}
Fluid Sketching — Immersive Sketching Based on Fluid Flow
Fluid artwork refers to works of art based on the aesthetics of fluid motion, such as smoke photography, ink injection into water, and paper marbling. Inspired by such types of art, we created Fluid Sketching as a novel medium for creating 3D fluid artwork in immersive virtual environments. It allows artists to draw 3D fluid-like sketches and manipulate them via six-degrees-of-freedom input devices. Different sets of brush strokes are available, varying different characteristics of the fluid. Because of the fluid's nature, the diffusion of the drawn fluid sketch is animated, and artists have control over altering the fluid properties and stopping the diffusion process whenever they are satisfied with the current result. Furthermore, they can shape the drawn sketch by directly interacting with it, either with their hands or by blowing into the fluid. We rely on particle advection via curl noise as a fast procedural method for animating the fluid flow.
@InProceedings{Eroglu2018,
author = {Eroglu, Sevinc and Gebhardt, Sascha and Schmitz, Patric and Rausch, Dominik and Kuhlen, Torsten Wolfgang},
title = {{Fluid Sketching — Immersive Sketching Based on Fluid Flow}},
booktitle = {Proceedings of IEEE Virtual Reality Conference 2018},
year = {2018}
}
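The advection core is standard curl noise: taking the curl of a smooth vector potential yields a divergence-free, fluid-like velocity field. A minimal sketch follows; any smooth potential function works here, whereas Fluid Sketching uses a noise function, so the choice of potential is an assumption:

import numpy as np

def curl_velocity(p, potential, h=1e-3):
    # velocity = curl(potential) at point p, via central differences;
    # the curl of any smooth field is divergence-free, hence fluid-like
    J = np.zeros((3, 3))
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        J[:, i] = (np.asarray(potential(p + e)) - np.asarray(potential(p - e))) / (2 * h)
    return np.array([J[2, 1] - J[1, 2], J[0, 2] - J[2, 0], J[1, 0] - J[0, 1]])

def advect(points, potential, dt=0.01, steps=100):
    # step all particles forward through the procedural flow field
    pts = np.asarray(points, dtype=float).copy()
    for _ in range(steps):
        pts += dt * np.array([curl_velocity(p, potential) for p in pts])
    return pts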
Seamless Hand-Based Remote and Close Range Interaction in IVEs
In this work, we describe a hybrid, hand-based interaction metaphor that makes remote and close objects in an HMD-based immersive virtual environment (IVE) seamlessly accessible. To accomplish this, different existing techniques, such as go-go and HOMER, were combined in a way that aims for generality, intuitiveness, uniformity, and speed. A technique like this is one prerequisite for a successful integration of IVEs to professional everyday applications, such as data analysis workflows.
Poster: Complexity Estimation for Feature Tracking Data.
Feature tracking is a method for the analysis of time-varying data. Due to the complexity of the underlying problem, different feature tracking algorithms have different levels of correctness in certain use cases. However, there is no efficient way to evaluate their performance on simulation data, since no ground truth is easily obtainable. Synthetic data is a way to ensure a minimum level of correctness, though there are limits to its expressiveness when comparing the results to simulation data. To close this gap, we calculate a synthetic data set and use its results to extract a hypothesis about the algorithm performance that we can apply to simulation data.
@inproceedings{Helmrich2018,
title={Complexity Estimation for Feature Tracking Data.},
author={Helmrich, Dirk N and Schnorr, Andrea and Kuhlen, Torsten W and Hentschel, Bernd},
booktitle={LDAV},
pages={100--101},
year={2018}
}
Talk: Streaming Live Neuronal Simulation Data into Visualization and Analysis
Being able to inspect neuronal network simulations while they are running provides new research strategies to neuroscientists, as it enables them to perform actions like parameter adjustments in case the simulation behaves unexpectedly. This can also save compute resources when such simulations are run on large supercomputers, as errors can be detected and corrected earlier, saving valuable compute time. This talk presents a prototypical pipeline that enables in-situ analysis and visualization of running simulations.
Fluid Sketching: Bringing Ebru Art into VR
In this interactive demo, we present our Fluid Sketching application as an innovative virtual-reality-based interpretation of traditional marbling art. By using a particle-based simulation combined with natural, spatial, and multi-modal interaction techniques, we create and extend the original artistic work to build a comprehensive interactive experience. With the interactive demo of Fluid Sketching during Mensch und Computer 2018, we aim at increasing the awareness of paper marbling as a traditional type of art and at demonstrating the potential of virtual reality as a new and innovative digital artistic medium.
@article{eroglu2018fluid,
title={Fluid Sketching: Bringing Ebru Art into VR},
author={Eroglu, Sevinc and Weyers, Benjamin and Kuhlen, Torsten},
journal={Mensch und Computer 2018-Workshopband},
year={2018},
publisher={Gesellschaft f{\"u}r Informatik eV}
}
Talk: Influence of Emotions on Personal Space Preferences
Personal Space (PS) is regulated dynamically by choosing an appropriate interpersonal distance when navigating through social environments. This key element in social interactions is influenced by numerous social and personal characteristics, e.g., the nature of the relationship between the interaction partners and the other’s sex and age. Moreover, affective contexts and expressions of interaction partners influence PS preferences, evident, e.g., in larger distances to others in threatening situations or when confronted with angry-looking individuals. Given the prominent role of emotional expressions in our everyday social interactions, we investigate how emotions affect PS adaptions.
Interactive Exploration of Dissipation Element Geometry
Dissipation elements (DE) define a geometrical structure for the analysis of small-scale turbulence. Existing analyses based on DEs focus on a statistical treatment of large populations of DEs. In this paper, we propose a method for the interactive visualization of the geometrical shape of DE populations. We follow a two-step approach: in a pre-processing step, we approximate individual DEs by tube-like, implicit shapes with elliptical cross sections of varying radii; we then render these approximations by direct ray-casting, thereby avoiding the need for costly generation of detailed, explicit geometry for rasterization. Our results demonstrate that the approximation gives a reasonable representation of DE geometries and that the rendering performance is suitable for interactive use.
@InProceedings{Vierjahn2017,
booktitle = {Eurographics Symposium on Parallel Graphics and Visualization},
author = {Tom Vierjahn and Andrea Schnorr and Benjamin Weyers and Dominik Denker and Ingo Wald and Christoph Garth and Torsten W. Kuhlen and Bernd Hentschel},
title = {Interactive Exploration of Dissipation Element Geometry},
year = {2017},
pages = {53--62},
ISSN = {1727-348X},
ISBN = {978-3-03868-034-5},
doi = {10.2312/pgv.20171093},
}
Measuring Insight into Multi-dimensional Data from a Combination of a Scatterplot Matrix and a HyperSlice Visualization
Understanding multi-dimensional data, and in particular multi-dimensional dependencies, is hard. Information visualization can help in understanding this type of data. Still, the problem of how users gain insights from such visualizations is not well understood. Both the visualizations and the users play a role in understanding the data. In a case study using both a scatterplot matrix and a HyperSlice visualization with six-dimensional data, we asked 16 participants to think aloud and measured insights during the process of analyzing the data. The amount of insights was strongly correlated with spatial abilities. Interestingly, all users were able to complete an optimization task independently of their self-reported understanding of the data.
@Inbook{CaleroValdez2017,
author="Calero Valdez, Andr{\'e}
and Gebhardt, Sascha
and Kuhlen, Torsten W.
and Ziefle, Martina",
editor="Duffy, Vincent G.",
title="Measuring Insight into Multi-dimensional Data from a Combination of a Scatterplot Matrix and a HyperSlice Visualization",
bookTitle="Digital Human Modeling. Applications in Health, Safety, Ergonomics, and Risk Management: Health and Safety: 8th International Conference, DHM 2017, Held as Part of HCI International 2017, Vancouver, BC, Canada, July 9-14, 2017, Proceedings, Part II",
year="2017",
publisher="Springer International Publishing",
address="Cham",
pages="225--236",
isbn="978-3-319-58466-9",
doi="10.1007/978-3-319-58466-9_21",
url="http://dx.doi.org/10.1007/978-3-319-58466-9_21"
}
Interactive Level-of-Detail Visualization of 3D-Polarized Light Imaging Data Using Spherical Harmonics
3D-Polarized Light Imaging (3D-PLI) provides data that enables an exploration of brain fibers at very high resolution. However, the visualization poses several challenges. Besides the huge data set sizes, users have to cope with the sheer amount of visual information, the perception of which might, among other aspects, be inhibited for inner structures due to occlusion by outer layers of the brain. We propose a clustering of fiber directions by means of spherical harmonics, organized in a level-of-detail structure by which the user can interactively choose a clustering degree according to the zoom level or the details required. Furthermore, the clustering method can be used to automatically group similar spherical harmonics into one representative. An optional overlay with a direct vector visualization of the 3D-PLI data provides better anatomical context.
Honorable Mention for Best Short Paper!
@inproceedings {Haenel2017Interactive,
booktitle = {EuroVis 2017 - Short Papers},
editor = {Barbora Kozlikova and Tobias Schreck and Thomas Wischgoll},
title = {{Interactive Level-of-Detail Visualization of 3D-Polarized Light Imaging Data Using Spherical Harmonics}},
author = {H\"anel, Claudia and Demiralp, Ali C. and Axer, Markus and Gr\"assel, David and Hentschel, Bernd and Kuhlen, Torsten W.},
year = {2017},
publisher = {The Eurographics Association},
ISBN = {978-3-03868-043-7},
DOI = {10.2312/eurovisshort.20171145}
}
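To sketch the projection step (not the paper's exact scheme): the directions in a voxel block can be summarized by the spherical-harmonics coefficients of their empirical distribution, and blocks with similar coefficient vectors can then share one representative per level of detail. Note scipy's argument order (azimuth before polar angle):

import numpy as np
from scipy.special import sph_harm

def sh_descriptor(azimuths, polars, lmax=4):
    # Monte Carlo projection of a set of unit directions onto Y_l^m;
    # similar descriptors can be clustered into one representative
    coeffs = []
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            y = sph_harm(m, l, azimuths, polars)  # scipy: (m, l, azimuth, polar)
            coeffs.append(np.mean(np.conj(y)))
    return np.asarray(coeffs)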
Comparison of a speech-based and a pie-menu-based interaction metaphor for application control
Choosing an adequate system control technique is crucial to support complex interaction scenarios in virtual reality applications. In this work, we compare an existing hierarchical pie-menu-based approach with a speech-recognition-based one in terms of task performance and user experience in a formal user study. As a testbed, we use a factory planning application featuring a large set of system control options.
@INPROCEEDINGS{Pick:691795,
author = {Pick, Sebastian and Puika, Andrew S. and Kuhlen, Torsten},
title = {{Comparison of a speech-based and a pie-menu-based interaction metaphor for application control}},
booktitle = {2017 IEEE Virtual Reality (VR)},
address = {Piscataway, NJ},
publisher = {IEEE},
pages = {381-382},
year = {2017},
month = {Mar},
doi = {10.1109/VR.2017.7892336},
url = {http://publications.rwth-aachen.de/record/691795},
}
buenoSDIAs: Supporting Desktop Immersive Analytics While Actively Preventing Cybersickness
Immersive data analytics is an emerging research topic in scientific and information visualization that has recently been brought back into focus by the emergence of low-cost consumer virtual reality hardware. Previous research has shown the positive impact of immersive visualization on data analytics workflows, but in most cases, insights were based on large-screen setups. In contrast, less research focuses on a close integration of immersive technology into existing, i.e., desktop-based, data analytics workflows. This implies specific requirements regarding the usability of such systems, including, e.g., the prevention of cybersickness. In this work, we present a prototypical application which offers a first set of tools and addresses major challenges for a fully immersive data analytics setting in which the user is sitting at a desktop. In particular, we address the problem of cybersickness by integrating prevention strategies combined with individualized user profiles to maximize the time of use.
Utilizing Immersive Virtual Reality in Everyday Work
Applications of Virtual Reality (VR) have been repeatedly explored with the goal to improve the data analysis process of users from different application domains, such as architecture and simulation sciences. Unfortunately, making VR available in professional application scenarios or even using it on a regular basis has proven to be challenging. We argue that everyday usage environments, such as office spaces, have introduced constraints that critically affect the design of interaction concepts since well-established techniques might be difficult to use. In our opinion, it is crucial to understand the impact of usage scenarios on interaction design, to successfully develop VR applications for everyday use. To substantiate our claim, we define three distinct usage scenarios in this work that primarily differ in the amount of mobility they allow for. We outline each scenario's inherent constraints but also point out opportunities that may be used to design novel, well-suited interaction techniques for different everyday usage environments. In addition, we link each scenario to a concrete application example to clarify its relevance and show how it affects interaction design.
Efficient Approximate Computation of Scene Visibility Based on Navigation Meshes and Applications for Navigation and Scene Analysis
Scene visibility, the information about which parts of the scene are visible from a certain location, can be used to derive various properties of a virtual environment. For example, it enables the computation of viewpoint quality to determine the informativeness of a viewpoint, helps in constructing virtual tours, and allows keeping track of the objects a user may already have seen. However, computing visibility at runtime may be too computationally expensive for many applications, while sampling the entire scene beforehand introduces a costly precomputation step and may include many samples not needed later on.
Therefore, in this paper, we propose a novel approach to precompute visibility information based on navigation meshes, a polygonal representation of a scene’s navigable areas. We show that with only limited precomputation, high accuracy can be achieved in these areas. Furthermore, we demonstrate the usefulness of the approach by means of several applications, including viewpoint quality computation, landmark and room detection, and exploration assistance. In addition, we present a travel interface based on common visibility that we found to result in less cybersickness in a user study.
@INPROCEEDINGS{freitag2017a,
author={Sebastian Freitag and Benjamin Weyers and Torsten W. Kuhlen},
booktitle={2017 IEEE Symposium on 3D User Interfaces (3DUI)},
title={{Efficient Approximate Computation of Scene Visibility Based on Navigation Meshes and Applications for Navigation and Scene Analysis}},
year={2017},
pages={134--143},
}
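A condensed sketch of the precomputation and of the common-visibility measure used for the travel interface; nav_samples and line_of_sight stand in for the navigation-mesh sampling and the scene's ray test, both assumed to be given:

def precompute_visibility(nav_samples, objects, line_of_sight):
    # nav_samples: positions sampled on the walkable polygons
    # line_of_sight(p, obj) -> bool: unobstructed view test (assumed given)
    return {i: {o for o in objects if line_of_sight(p, o)}
            for i, p in enumerate(nav_samples)}

def common_visibility(vis, i, j):
    # overlap of what two locations see; a high value means travel between
    # them reveals little new, so it can be passed quickly
    union = len(vis[i] | vis[j]) or 1
    return len(vis[i] & vis[j]) / union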
Approximating Optimal Sets of Views in Virtual Scenes
Viewpoint quality estimation methods allow the determination of the most informative position in a scene. However, a single position usually cannot represent an entire scene, requiring instead a set of several viewpoints. Measuring the quality of such a set of views, however, is not trivial, and the computation of an optimal set of views is an NP-hard problem. Therefore, in this work, we propose three methods to estimate the quality of a set of views. Furthermore, we evaluate three approaches for computing an approximation to the optimal set (two of them new) regarding effectiveness and efficiency.
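One plausible reading of such an approximation, shown below as a generic greedy cover in Python (not necessarily the paper's best-performing variant):

def greedy_view_set(candidates, visible, k):
    # repeatedly add the view that covers the most still-uncovered objects
    chosen, covered = [], set()
    pool = list(candidates)
    for _ in range(min(k, len(pool))):
        v = max(pool, key=lambda c: len(visible[c] - covered))
        chosen.append(v)
        covered |= visible[v]
        pool.remove(v)
    return chosen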
Assisted Travel Based on Common Visibility and Navigation Meshes
The manual adjustment of travel speed to cover medium or large distances in virtual environments may increase cognitive load, and manual travel at high speeds can lead to cybersickness due to inaccurate steering. In this work, we present an approach to quickly pass regions where the environment does not change much, using automated suggestions based on the computation of common visibility. In a user study, we show that our method can reduce cybersickness when compared with manual speed control.
BlowClick 2.0: A Trigger Based on Non-Verbal Vocal Input
The use of non-verbal vocal input (NVVI) as a hands-free trigger approach has proven to be valuable in previous work [Zielasko2015]. Nevertheless, BlowClick's original detection method is vulnerable to false positives and is thus limited in its potential use, e.g., together with acoustic feedback for the trigger. Therefore, we extend the existing approach with common machine learning methods. We found that a support vector machine (SVM) with a Gaussian kernel performs best, detecting blowing with at least the same latency and more precision than before. Furthermore, we added acoustic feedback to the NVVI trigger, which increases the user's confidence. To evaluate the advanced trigger technique, we conducted a user study (n=33). The results confirm that it is a reliable trigger, both alone and as part of a hands-free point-and-click interface.
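The classification step maps naturally onto off-the-shelf tooling; a minimal sketch with scikit-learn, where the per-frame audio features are an assumption (the paper defines its own feature set):

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_blow_detector(frames, labels):
    # frames: (n, d) array of per-frame audio features; labels: 1 = blowing
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(np.asarray(frames), np.asarray(labels))
    return clf  # clf.predict(new_frames) then acts as the NVVI trigger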
A Reliable Non-Verbal Vocal Input Metaphor for Clicking
We extended BlowClick, an NVVI metaphor for clicking, by adding machine learning methods to more reliably classify blowing events. We found a support vector machine with a Gaussian kernel to perform best, with at least the same latency and more precision than before. Furthermore, we added acoustic feedback to the NVVI trigger, which increases the user's confidence. With this extended technique, we conducted a user study with 33 participants and confirmed that it is possible to use NVVI as a reliable trigger as part of a hands-free point-and-click interface.
Remain Seated: Towards Fully-Immersive Desktop VR
In this work, we describe the scenario of fully-immersive desktop VR, which serves the overall goal of seamlessly integrating with existing workflows and workplaces of data analysts and researchers, such that they can benefit from the gain in productivity when immersed in their data-spaces. Furthermore, we provide a literature review showing the status quo of techniques and methods available for realizing this scenario under the given restrictions. Finally, we propose a concept of an analysis framework, including the decisions already made and the decisions still to be taken, to outline how the described scenario and the collected methods are feasible in a real use case.
Evaluation of Approaching-Strategies of Temporarily Required Virtual Assistants in Immersive Environments
Embodied virtual agents provide users with assistance in agent-based support systems. To this end, two closely linked factors have to be considered for the agents' behavioral design: their presence time (PT), i.e., the time in which the agents are visible, and the approaching time (AT), i.e., the time span between the user's calling for an agent and the agent's actual availability.
This work focuses on human-like assistants that are embedded in immersive scenes but that are required only temporarily. To the best of our knowledge, guidelines for a suitable trade-off between PT and AT of these assistants do not yet exist. We address this gap by presenting the results of a controlled within-subjects study in a CAVE. While keeping a low PT so that the agent is not perceived as annoying, three strategies affecting the AT, namely fading, walking, and running, are evaluated by 40 subjects. The results indicate no clear preference for either behavior. Instead, the necessity of a better trade-off between a low AT and an agent’s realistic behavior is demonstrated.
@InProceedings{Boensch2017b,
Title = {Evaluation of Approaching-Strategies of Temporarily Required Virtual Assistants in Immersive Environments},
Author = {Andrea B\"{o}nsch and Tom Vierjahn and Torsten W. Kuhlen},
Booktitle = {IEEE Symposium on 3D User Interfaces},
Year = {2017},
Pages = {69-72}
}
Gistualizer: An Immersive Glyph for Multidimensional Datapoints
Data from diverse workflows is often too complex for an adequate analysis without visualization. One kind of such data are multi-dimensional datasets, which can be visualized via a wide array of techniques. For instance, glyphs can be used to visualize individual datapoints. However, glyphs need to be actively looked at to be comprehended. This work explores a novel approach towards visualizing a single datapoint, with the intention of increasing the user's awareness of it while they are looking at something else. The basic concept is to represent this point by a scene that surrounds the user in an immersive virtual environment. This idea is based on the observation that humans can extract low-detailed information, the so-called gist, from a scene nearly instantly (in 100 ms or less). We aim at providing a first step towards answering the question of whether enough information can be encoded in the gist of a scene to represent a point in multi-dimensional space, and whether this information is helpful to the user's understanding of this space.
@inproceedings{Bellgardt2017,
author = {Bellgardt, Martin and Gebhardt, Sascha and Hentschel, Bernd and Kuhlen, Torsten W.},
booktitle = {Workshop on Immersive Analytics},
title = {{Gistualizer: An Immersive Glyph for Multidimensional Datapoints}},
year = {2017}
}
Turning Anonymous Members of a Multiagent System into Individuals
It is increasingly common to embed embodied, human-like virtual agents into immersive virtual environments for one of two use cases: (1) populating architectural scenes as anonymous members of a crowd, and (2) meeting or supporting users as individual, intelligent, and conversational agents. However, the new trend towards intelligent cyber-physical systems inherently combines both use cases. Thus, we argue for the necessity of multiagent systems consisting of anonymous and autonomous agents who temporarily turn into intelligent individuals. Besides purely enlivening the scene, each agent can thus be engaged by the user in a situation-dependent interaction, e.g., a conversation or a joint task. To this end, we devise components for an agent's behavioral design modeling the transition between an anonymous and an individual agent when a user approaches.
@InProceedings{Boensch2017c,
Title = {{Turning Anonymous Members of a Multiagent System into Individuals}},
Author = {Andrea B\"{o}nsch and Tom Vierjahn and Ari Shapiro and Torsten W. Kuhlen},
Booktitle = {IEEE Virtual Humans and Crowds for Immersive Environments},
Year = {2017},
Keywords = {Virtual Humans; Virtual Reality; Intelligent Agents; Multiagent System},
DOI = {10.1109/VHCIE.2017.7935620}
}
Poster: Score-Based Recommendation for Efficiently Selecting Individual Virtual Agents in Multi-Agent Systems
Controlling user-agent interactions by means of an external operator requires selecting the virtual interaction partners quickly and faultlessly. However, especially in immersive scenes with a large number of potential partners, this task is non-trivial.
Thus, we present a score-based recommendation system supporting an operator in the selection task. Agents are recommended as potential partners based on two parameters: the user’s distance to the agents and the user’s gazing direction. An additional graphical user interface (GUI) provides elements for configuring the system and for applying actions to those agents which the operator has confirmed as interaction partners.
@InProceedings{Boensch2017d,
Title = {Score-Based Recommendation for Efficiently Selecting Individual Virtual Agents in Multi-Agent Systems},
Author = {Andrea B\"{o}nsch and Robert Trisnadi and Jonathan Wendt and Tom Vierjahn and Torsten W. Kuhlen},
Booktitle = {Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology},
Year = {2017},
Pages = {tba},
DOI = {10.1145/3139131.3141215}
}
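The two-parameter score can be pictured as follows; the weights and the exact combination below are illustrative, as the paper's GUI exposes such parameters for configuration:

import numpy as np

def recommend_agents(user_pos, gaze_dir, agents, w_gaze=0.5, w_dist=0.5):
    # agents: dict id -> 3D position; gaze_dir must be a unit vector
    scores = {}
    for aid, pos in agents.items():
        to_agent = np.asarray(pos, dtype=float) - np.asarray(user_pos, dtype=float)
        dist = np.linalg.norm(to_agent) + 1e-9
        alignment = float(np.dot(to_agent / dist, gaze_dir))  # 1 = dead ahead
        scores[aid] = w_gaze * (alignment + 1.0) / 2.0 + w_dist / (1.0 + dist)
    return sorted(scores, key=scores.get, reverse=True)  # best candidates first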
Poster: Towards a Design Space Characterizing Workflows that Take Advantage of Immersive Visualization
Immersive visualization (IV) fosters the creation of mental images of a data set, a scene, a procedure, etc. We devise an initial version of a design space for categorizing workflows that take advantage of IV. From this categorization, specific requirements for seamlessly integrating IV can be derived. We validate the design space with three workflows emerging from our research projects.
@InProceedings{Vierjahn2017a,
Title = {Towards a Design Space Characterizing Workflows that Take Advantage of Immersive Visualization},
Author = {Tom Vierjahn and Daniel Zielasko and Kees van Kooten and Peter Messmer and Bernd Hentschel and Torsten W. Kuhlen and Benjamin Weyers},
Booktitle = {IEEE Virtual Reality Conference Poster Proceedings},
Year = {2017},
Pages = {329-330},
DOI={10.1109/VR.2017.7892310}
}
Poster: Peers At Work: Economic Real-Effort Experiments In The Presence of Virtual Co-Workers
Traditionally, experimental economics uses controlled and incentivized field and lab experiments to analyze economic behavior. However, investigating peer effects in the classic settings is challenging due to the reflection problem: Who is influencing whom?
To overcome this, we enlarge the methodological toolbox of these experiments by means of Virtual Reality. After introducing and validating a real-effort sorting task, we embed a virtual agent as peer of a human subject, who independently performs an identical sorting task. We conducted two experiments investigating (a) the subject’s productivity adjustment due to peer effects and (b) the incentive effects on competition. Our results indicate a great potential for Virtual-Reality-based economic experiments.
@InProceedings{Boensch2017a,
Title = {Peers At Work: Economic Real-Effort Experiments In The Presence of Virtual Co-Workers},
Author = {Andrea B\"{o}nsch and Jonathan Wendt and Heiko Overath and Özgür Gürerk and Christine Harbring and Christian Grund and Thomas Kittsteiner and Torsten W. Kuhlen},
Booktitle = {IEEE Virtual Reality Conference Poster Proceedings},
Year = {2017},
Pages = {301-302},
DOI = {10.1109/VR.2017.7892296}
}
Virtual Production Intelligence
The research area Virtual Production Intelligence (VPI) focuses on the integrated support of collaborative planning processes for production systems and products. The focus of the research is on processes for information processing in the design domains Factory and Machine. These processes provide the integration and interactive analysis of emerging, mostly heterogeneous planning information. The demonstrators (flapAssist, memoSlice, and the VPI platform), which are information systems, serve to validate the scientific approaches and aim to realize continuous and consistent information management in terms of the Digital Factory. Central challenges are the semantic information integration (e.g., by means of metamodelling), the subsequent evaluation, and the visualization of planning information (e.g., by means of Visual Analytics and Virtual Reality). All scientific and technical work is done within an interdisciplinary team composed of engineers, computer scientists, and physicists.
@BOOK{Brecher:683508,
editor = {Brecher, Christian and Özdemir, Denis},
title = {{I}ntegrative {P}roduction {T}echnology: {T}heory and {A}pplications},
address = {Cham},
publisher = {Springer International Publishing},
isbn = {978-3-319-47451-9},
pages = {XXXIX, 1100 pages: illustrations},
year = {2017},
doi = {10.1007/978-3-319-47452-6},
url = {http://publications.rwth-aachen.de/record/683508},
}
Do Not Invade: A Virtual-Reality-Framework to Study Personal Space
The aim of this bachelor's thesis was to develop a framework for designing and conducting virtual-reality-based user studies that provide insight into the concept of personal space.
@Article{Schnathmeier2017,
Title = {Do Not Invade: A Virtual-Reality-Framework to Study Personal Space},
Author = {Jan Schnathmeier and Heiko Overath and Sina Radke and Andrea B\"{o}nsch and Ute Habel and Torsten W. Kuhlen},
Journal = {{V}irtuelle und {E}rweiterte {R}ealit\"at, 14. {W}orkshop der {GI}-{F}achgruppe {VR}/{AR}},
Year = {2017},
Pages = {203-204},
ISBN = {978-3-8440-5606-8},
Publisher = {Shaker Verlag}
}
Accurate and adaptive contact modeling for multi-rate multi-point haptic rendering of static and deformable environments
Common approaches for the haptic rendering of complex scenarios employ multi-rate simulation schemes. Here, the collision queries or the simulation of a complex deformable object are often performed asynchronously at a lower frequency, while some kind of intermediate contact representation is used to simulate interactions at the haptic rate. However, this can produce artifacts in the haptic rendering when the contact situation quickly changes and the intermediate representation is not able to reflect the changes due to the lower update rate.
We address this problem utilizing a novel contact model. It facilitates the creation of contact representations that are accurate for a large range of motions and multiple simulation time-steps. We handle problematic geometrically convex contact regions using a local convex decomposition and special constraints for convex areas. We combine our accurate contact model with an implicit temporal integration scheme to create an intermediate mechanical contact representation, which reflects the dynamic behavior of the simulated objects. To maintain a haptic real time simulation, the size of the region modeled by the contact representation is automatically adapted to the complexity of the geometry in contact. Moreover, we propose a new iterative solving scheme for the involved constrained dynamics problems. We increase the robustness of our method using techniques from trust region-based optimization. Our approach can be combined with standard methods for the modeling of deformable objects or constraint-based approaches for the modeling of, for instance, friction or joints. We demonstrate its benefits with respect to the simulation accuracy and the quality of the rendered haptic forces in several scenarios with one or more haptic proxies.
@Article{Knott201668,
Title = {Accurate and adaptive contact modeling for multi-rate multi-point haptic rendering of static and deformable environments },
Author = {Thomas C. Knott and Torsten W. Kuhlen},
Journal = {Computers \& Graphics },
Year = {2016},
Pages = {68 - 80},
Volume = {57},
Doi = {10.1016/j.cag.2016.03.007},
ISSN = {0097-8493},
Keywords = {Haptic rendering},
Url = {http://www.sciencedirect.com/science/article/pii/S0097849316300206}
}
Interactive 3D Force-Directed Edge Bundling
Interactive analysis of 3D relational data is challenging. A common way of representing such data is node-link diagrams, as they support analysts in achieving a mental model of the data. However, naïve 3D depictions of complex graphs tend to be visually cluttered, even more than in a 2D layout. This makes graph exploration and data analysis less efficient. This problem can be addressed by edge bundling. We introduce a 3D cluster-based edge bundling algorithm that is inspired by the force-directed edge bundling (FDEB) algorithm [Holten2009] and fulfills the requirements to be embedded in an interactive framework for spatial data analysis. It is parallelized and its runtime scales with the size of the graph. Furthermore, it maintains the edge model and thus supports rendering the graph in different structural styles. We demonstrate this with a graph originating from a simulation of the function of a macaque brain.
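For readers unfamiliar with the underlying technique, the following is a minimal sketch of force-directed edge bundling in 3D, reduced to neighbour springs and attraction between corresponding subdivision points of different edges. The compatibility weighting, clustering, and parallelization of the method described above are omitted, and the function name and all constants are illustrative assumptions.

import numpy as np

def bundle_edges(edges, n_subdiv=16, n_iter=200, step=0.1,
                 k_spring=0.1, k_attract=0.05):
    """Minimal 3D edge bundling: each edge becomes a polyline whose inner
    points are pulled straight by neighbour springs and pulled together by
    attraction between corresponding points of different edges. The original
    FDEB additionally weights the attraction by edge compatibility."""
    t = np.linspace(0.0, 1.0, n_subdiv + 2)
    # Start with evenly spaced subdivision points along each straight edge.
    P = edges[:, 0:1, :] + t[None, :, None] * (edges[:, 1:2, :] - edges[:, 0:1, :])
    for _ in range(n_iter):
        inner = P[:, 1:-1, :]
        # Springs pull every inner point towards its two polyline neighbours.
        f_spring = k_spring * (P[:, :-2, :] + P[:, 2:, :] - 2.0 * inner)
        # Corresponding points of all other edges attract each other.
        diff = inner[None, :, :, :] - inner[:, None, :, :]   # (m, m, n, 3)
        f_attract = k_attract * diff.sum(axis=1)
        P[:, 1:-1, :] += step * (f_spring + f_attract)       # endpoints stay fixed
    return P

# Two parallel edges: their inner points are drawn towards a common bundle.
edges = np.array([[[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]],
                  [[0.0, 1.0, 0.0], [10.0, 1.0, 0.0]]])
print(bundle_edges(edges)[:, 8, 1])   # mid-point y-coordinates approach each other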
Visual Quality Adjustment for Volume Rendering in a Head-Tracked Virtual Environment
To avoid simulator sickness and improve presence in immersive virtual environments (IVEs), high frame rates and low latency are required. In contrast, volume rendering applications typically strive for high visual quality that induces high computational load and, thus, leads to low frame rates. To evaluate this trade-off in IVEs, we conducted a controlled user study with 53 participants. Search and count tasks were performed in a CAVE with varying volume rendering conditions, which were applied according to viewer position updates corresponding to head tracking. The results of our study indicate that participants preferred the rendering condition with continuous adjustment of the visual quality over an instantaneous adjustment that guaranteed low latency, and over no adjustment, which provided constant high visual quality but rather low frame rates. Within the continuous condition, the participants showed the best task performance and felt less disturbed by effects of the visualization during movements. Our findings provide a good basis for further evaluations of how to accelerate volume rendering in IVEs according to users' preferences.
@article{Hanel2016,
author = { H{\"{a}}nel, Claudia and Weyers, Benjamin and Hentschel, Bernd and Kuhlen, Torsten W.},
doi = {10.1109/TVCG.2016.2518338},
issn = {10772626},
journal = {IEEE Transactions on Visualization and Computer Graphics},
number = {4},
pages = {1472--1481},
pmid = {26780811},
title = {{Visual Quality Adjustment for Volume Rendering in a Head-Tracked Virtual Environment}},
volume = {22},
year = {2016}
}
Examining Rotation Gain in CAVE-like Virtual Environments
When moving through a tracked immersive virtual environment, it is sometimes useful to deviate from the normal one-to-one mapping of real to virtual motion. One option is the application of rotation gain, where the virtual rotation of a user around the vertical axis is amplified or reduced by a factor. Previous research in head-mounted display environments has shown that rotation gain can go unnoticed to a certain extent, which is exploited in redirected walking techniques. Furthermore, it can be used to increase the effective field of regard in projection systems. However, rotation gain has not been studied in CAVE systems yet. In this work, we present an experiment with 87 participants examining the effects of rotation gain in a CAVE-like virtual environment. The results show no significant effects of rotation gain on simulator sickness, presence, or user performance in a cognitive task, but indicate that there is a negative influence on spatial knowledge especially for inexperienced users. In secondary results, we could confirm results of previous work and demonstrate that they also hold for CAVE environments, showing a negative correlation between simulator sickness and presence, cognitive performance and spatial knowledge, a positive correlation between presence and spatial knowledge, a mitigating influence of experience with 3D applications and previous CAVE exposure on simulator sickness, and a higher incidence of simulator sickness in women.
@ARTICLE{freitag2016a,
author={S. Freitag and B. Weyers and T. W. Kuhlen},
journal={IEEE Transactions on Visualization and Computer Graphics},
title={{Examining Rotation Gain in CAVE-like Virtual Environments}},
year={2016},
volume={22},
number={4},
pages={1462-1471},
doi={10.1109/TVCG.2016.2518298},
ISSN={1077-2626},
month={April},
}
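As a brief illustration of the core mechanism examined in the entry above, the following sketch applies a rotation gain to tracked head-yaw data by scaling the per-frame rotation increments; the helper name and the sample gain value are illustrative assumptions, not the study's implementation.

import numpy as np

def apply_rotation_gain(yaw_samples, gain):
    """Scale the user's per-frame real yaw increments by `gain` to obtain
    the virtual yaw: gain > 1 amplifies, gain < 1 dampens the rotation."""
    yaw = np.asarray(yaw_samples, dtype=float)
    increments = np.diff(yaw, prepend=yaw[0])
    return yaw[0] + np.cumsum(gain * increments)

# A user physically turning 90 degrees is rotated 117 degrees virtually
# with an (assumed) gain of 1.3.
real_yaw = np.radians(np.linspace(0.0, 90.0, 100))
virtual_yaw = apply_rotation_gain(real_yaw, gain=1.3)
print(np.degrees(virtual_yaw[-1]))   # ~117.0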
Design and Evaluation of Data Annotation Workflows for CAVE-like Virtual Environments
Data annotation finds increasing use in Virtual Reality applications with the goal to support the data analysis process, such as architectural reviews. In this context, a variety of different annotation systems for application to immersive virtual environments have been presented. While many interesting interaction designs for the data annotation workflow have emerged from them, important details and evaluations are often omitted. In particular, we observe that the process of handling metadata to interactively create and manage complex annotations is often not covered in detail. In this paper, we strive to improve this situation by focusing on the design of data annotation workflows and their evaluation. We propose a workflow design that facilitates the most important annotation operations, i.e., annotation creation, review, and modification. Our workflow design is easily extensible in terms of supported annotation and metadata types as well as interaction techniques, which makes it suitable for a variety of application scenarios. To evaluate it, we have conducted a user study in a CAVE-like virtual environment in which we compared our design to two alternatives in terms of a realistic annotation creation task. Our design obtained good results in terms of task performance and user experience.
Towards the Ultimate Display for Neuroscientific Data Analysis
This article wants to give some impulses for a discussion about how an “ultimate” display should look like to support the Neuroscience community in an optimal way. In particular, we will have a look at immersive display technology. Since its hype in the early 90’s, immersive Virtual Reality has undoubtedly been adopted as a useful tool in a variety of application domains and has indeed proven its potential to support the process of scientific data analysis. Yet, it is still an open question whether or not such non-standard displays make sense in the context of neuroscientific data analysis. We argue that the potential of immersive displays is neither about the raw pixel count only, nor about other hardware-centric characteristics. Instead, we advocate the design of intuitive and powerful user interfaces for a direct interaction with the data, which support the multi-view paradigm in an efficient and flexible way, and – finally – provide interactive response times even for huge amounts of data and when dealing multiple datasets simultaneously.
@InBook{Kuhlen2016,
Title = {Towards the Ultimate Display for Neuroscientific Data Analysis},
Author = {Kuhlen, Torsten Wolfgang and Hentschel, Bernd},
Editor = {Amunts, Katrin and Grandinetti, Lucio and Lippert, Thomas and Petkov, Nicolai},
Pages = {157--168},
Publisher = {Springer International Publishing},
Year = {2016},
Address = {Cham},
Booktitle = {Brain-Inspired Computing: Second International Workshop, BrainComp 2015, Cetraro, Italy, July 6-10, 2015, Revised Selected Papers},
Doi = {10.1007/978-3-319-50862-7_12},
ISBN = {978-3-319-50862-7},
Url = {http://dx.doi.org/10.1007/978-3-319-50862-7_12}
}
Human Factors in Information Visualization and Decision Support Systems
With the increase in data availability and data volume, it becomes increasingly important to extract information and actionable knowledge from data. Information Visualization helps the user to understand data by utilizing vision as a relatively parallel input channel to the user's mind. Decision Support systems, on the other hand, help users make information actionable by suggesting beneficial decisions and presenting them in context. Both fields share a common need for understanding the interface between the computer and the human. This makes human factors research critical for both fields. The limitations of human perception, cognition, and action, as well as their variance, must be understood to fully leverage information visualization and decision support. This article reflects on research agendas for investigating human factors in the aforementioned fields.
@inproceedings{Valdez2016,
author = {Valdez, André Calero AND Brauner, Philipp AND Ziefle, Martina AND Kuhlen, Torsten Wolfgang AND Sedlmair, Michael},
title = {Human Factors in Information Visualization and Decision Support Systems},
booktitle = {Mensch und Computer 2016 – Workshopband},
year = {2016},
editor = {Weyers, Benjamin AND Dittmar, Anke},
pages = {},
publisher = {Gesellschaft für Informatik e.V.},
address = {Aachen}
}
Towards Multi-user Provenance Tracking of Visual Analysis Workflows over Multiple Applications
Provenance tracking for visual analysis workflows is still a challenge, as especially interaction and collaboration aspects are poorly covered in existing realizations. Therefore, we propose a first prototype addressing these issues based on the PROV model. Interactions in multiple applications by multiple users can be tracked by means of a web interface, thus even allowing for the tracking of remotely located collaboration partners. Finally, we demonstrate the applicability based on two use cases and discuss some open issues that are not yet addressed by our implementation but can easily be integrated into our architecture.
@inproceedings {eurorv3.20161112,
booktitle = {EuroVis Workshop on Reproducibility, Verification, and Validation in Visualization (EuroRV3)},
editor = {Kai Lawonn and Mario Hlawitschka and Paul Rosenthal},
title = {{Towards Multi-user Provenance Tracking of Visual Analysis Workflows over Multiple Applications}},
author = { H{\"{a}}nel, Claudia and Khatami, Mohammad and Kuhlen, Torsten W. and Weyers, Benjamin},
year = {2016},
publisher = {The Eurographics Association},
ISBN = {978-3-03868-017-8},
DOI = {10.2312/eurorv3.20161112}
}
A Lightweight Electrotactile Feedback Device to Improve Grasping in Immersive Virtual Environments
An immersive virtual environment is the ideal platform for the planning and training of on-orbit servicing missions. In this kind of virtual assembly simulation, grasping virtual objects is one of the most common and natural interactions. In this paper, we present a novel, small and lightweight electrotactile feedback device, specifically designed for immersive virtual environments. We conducted a study to assess the feasibility and usability of our interaction device. Results show that electrotactile feedback improved the user's grasping in our virtual on-orbit servicing scenario. The task completion time was significantly lower and the precision of the user's interaction was higher.
Collision Avoidance in the Presence of a Virtual Agent in Small-Scale Virtual Environments
Computer-controlled, human-like virtual agents (VAs) are often embedded into immersive virtual environments (IVEs) in order to enliven a scene or to assist users. Certain constraints need to be fulfilled, e.g., a collision avoidance strategy allowing users to maintain their personal space. Violating this flexible protective zone causes discomfort in real-world situations and in IVEs. However, no studies on collision avoidance for small-scale IVEs have been conducted yet.
Our goal is to close this gap by presenting the results of a controlled user study in a CAVE. 27 participants were immersed in a small-scale office with the task of reaching the office door. Their way was blocked either by a male or female VA, representing their co-worker. The VA showed different behavioral patterns regarding gaze and locomotion.
Our results indicate that participants preferred collaborative collision avoidance: they expect the VA to step aside in order to get more space to pass while being willing to adapt their own walking paths.
Honorable Mention for Best Technote!
@InProceedings{Boensch2016a,
Title = {Collision Avoidance in the Presence of a Virtual Agent in Small-Scale Virtual Environments},
Author = {Andrea B\"{o}nsch and Benjamin Weyers and Jonathan Wendt and Sebastian Freitag and Torsten W. Kuhlen},
Booktitle = {IEEE Symposium on 3D User Interfaces},
Year = {2016},
Pages = {145-148},
}
Automatic Speed Adjustment for Travel through Immersive Virtual Environments based on Viewpoint Quality
When traveling virtually through large scenes, long distances and different detail densities render fixed movement speeds impractical. However, to manually adjust the travel speed, users have to control an additional parameter, which may be uncomfortable and requires cognitive effort. Although automatic speed adjustment techniques exist, many of them can be problematic in indoor scenes. Therefore, we propose to automatically adjust travel speed based on viewpoint quality, originally a measure of the informativeness of a viewpoint. In a user study, we show that our technique is easy to use, allowing users to reach targets faster and use less cognitive resources than when choosing their speed manually.
Best Technote!
@INPROCEEDINGS{freitag2016b,
author={S. Freitag and B. Weyers and T. W. Kuhlen},
booktitle={2016 IEEE Symposium on 3D User Interfaces (3DUI)},
title={{Automatic Speed Adjustment for Travel through Immersive Virtual Environments based on Viewpoint Quality}},
year={2016},
pages={67-70},
doi={10.1109/3DUI.2016.7460033},
month={March},
}
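The central idea of the technote above, mapping viewpoint quality to travel speed, can be sketched as follows; the inverse-linear mapping and all constants are illustrative assumptions rather than the calibrated function from the paper.

import numpy as np

def travel_speed(quality, v_min=0.5, v_max=20.0):
    """Map a viewpoint-quality score in [0, 1] to a travel speed in m/s:
    informative viewpoints slow the user down for inspection, while
    uninteresting stretches are traversed quickly."""
    q = np.clip(quality, 0.0, 1.0)
    return v_max - q * (v_max - v_min)

for q in (0.1, 0.5, 0.9):
    print(f"viewpoint quality {q:.1f} -> speed {travel_speed(q):5.2f} m/s")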
SWIFTER: Design and Evaluation of a Speech-based Text Input Metaphor for Immersive Virtual Environments
Text input is an important part of the data annotation process, where text is used to capture ideas and comments. For text entry in immersive virtual environments, for which standard keyboards usually do not work, various approaches have been proposed. While these solutions have mostly proven effective, there still remain certain shortcomings making further investigations worthwhile. Motivated by recent research, we propose the speech-based multimodal text entry system SWIFTER, which strives for simplicity while maintaining good performance. In an initial user study, we compared our approach to smartphone-based text entry within a CAVE-like virtual environment. Results indicate that SWIFTER reaches an average input rate of 23.6 words per minute and is positively received by users in terms of user experience.
Evaluation of Hands-Free HMD-Based Navigation Techniques for Immersive Data Analysis
To use the full potential of immersive data analysis when wearing a head-mounted display, users have to be able to navigate through the spatial data. We collected, developed and evaluated 5 different hands-free navigation methods that are usable while seated in the analyst’s usual workplace. All methods meet the requirements of being easy to learn and inexpensive to integrate into existing workplaces. We conducted a user study with 23 participants which showed that a body leaning metaphor and an accelerometer pedal metaphor performed best. In the given task the participants had to determine the shortest path between various pairs of vertices in a large 3D graph.
Interactive Simulation of Aircraft Noise in Aural and Visual Virtual Environments
This paper describes a novel aircraft noise simulation technique developed at RWTH Aachen University, which makes use of aircraft noise auralization and 3D visualization to make aircraft noise both heard and seen in immersive Virtual Reality (VR) environments. This technique is intended to be used to increase the residents' acceptance of aircraft noise by presenting noise changes in a more directly relatable form, and also to aid in understanding what contributes to the residents' subjective annoyance via psychoacoustic surveys. This paper describes the technique as well as some of its initial applications. The reasoning behind the development of such a technique is that the issue of aircraft noise experienced by residents in airport vicinities is one of subjective annoyance. Any efforts at noise abatement have conventionally been presented to residents in terms of noise level reductions in conventional metrics such as the A-weighted level or the equivalent sound level Leq. This conventional approach, however, proves insufficient in increasing aircraft noise acceptance for two main reasons: firstly, the residents have only a rudimentary understanding of changes in decibels, and secondly, the conventional metrics do not fully capture what the residents actually find annoying, i.e., the characteristics of aircraft noise they find least acceptable. To meet the least resistance to air-traffic expansion, the acceptance of aircraft noise has to be increased, which requires such a new approach to noise assessment.
Vista Widgets: A Framework for Designing 3D User Interfaces from Reusable Interaction Building Blocks
Virtual Reality (VR) has been an active field of research for several decades, with 3D interaction and 3D User Interfaces (UIs) as important sub-disciplines. However, the development of 3D interaction techniques and in particular combining several of them to construct complex and usable 3D UIs remains challenging, especially in a VR context. In addition, there is currently only limited reusable software for implementing such techniques in comparison to traditional 2D UIs. To overcome this issue, we present ViSTA Widgets, a software framework for creating 3D UIs for immersive virtual environments. It extends the ViSTA VR framework by providing functionality to create multi-device, multi-focus-strategy interaction building blocks and means to easily combine them into complex 3D UIs. This is realized by introducing a device abstraction layer along with sophisticated focus management and functionality to create novel 3D interaction techniques and 3D widgets. We present the framework and illustrate its effectiveness with code and application examples accompanied by performance evaluations.
@InProceedings{Gebhardt2016,
Title = {{Vista Widgets: A Framework for Designing 3D User Interfaces from Reusable Interaction Building Blocks}},
Author = {Gebhardt, Sascha and Petersen-Krau{\ss}, Till and Pick, Sebastian and Rausch, Dominik and Nowke, Christian and Knott, Thomas and Schmitz, Patric and Zielasko, Daniel and Hentschel, Bernd and Kuhlen, Torsten W.},
Booktitle = {Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology},
Year = {2016},
Address = {New York, NY, USA},
Pages = {251--260},
Publisher = {ACM},
Series = {VRST '16},
Acmid = {2993382},
Doi = {10.1145/2993369.2993382},
ISBN = {978-1-4503-4491-3},
Keywords = {3D interaction, 3D user interfaces, framework, multi-device, virtual reality},
Location = {Munich, Germany},
Numpages = {10},
Url = {http://doi.acm.org/10.1145/2993369.2993382}
}
Experiences on Validation of Multi-Component System Simulations for Medical Training Applications
In the simulation of multi-component systems, we often encounter the problem of a lack of ground-truth data. This situation makes the validation of our simulation methods and models a difficult task. In this work we present a guideline for designing validation methodologies that can be applied to the validation of multi-component simulations that lack ground-truth data. Additionally, we present an example applied to an Ultrasound Image Simulation for medical training and give an overview of the considerations made and the results for each of the validation methods. With these guidelines we expect to obtain more comparable and reproducible validation results from which other similar work can benefit.
@InProceedings{eurorv3.20161113,
author = {Law, Yuen C. and Weyers, Benjamin and Kuhlen, Torsten W.},
title = {{Experiences on Validation of Multi-Component System Simulations for Medical Training Applications}},
booktitle = {EuroVis Workshop on Reproducibility, Verification, and Validation in Visualization (EuroRV3)},
year = {2016},
editor = {Kai Lawonn and Mario Hlawitschka and Paul Rosenthal},
publisher = {The Eurographics Association},
doi = {10.2312/eurorv3.20161113},
isbn = {978-3-03868-017-8},
pages = {29--33}
}
Visualizing Performance Data With Respect to the Simulated Geometry
Understanding the performance behaviour of high-performance computing (HPC) applications based on performance profiles is a challenging task. Phenomena in the performance behaviour can stem from the HPC system itself, from the application's code, but also from the simulation domain. In order to analyse the latter phenomena, we propose a system that visualizes profile-based performance data in its spatial context in the simulation domain, i.e., on the geometry processed by the application. It thus helps HPC experts and simulation experts understand the performance data better. Furthermore, it reduces the initially large search space by automatically labeling those parts of the data that reveal variation in performance and thus require detailed analysis.
@inproceedings{VIERJAHN-2016-02,
Author = {Vierjahn, Tom and Kuhlen, Torsten W. and M\"{u}ller, Matthias S. and Hentschel, Bernd},
Booktitle = {JARA-HPC Symposium (accepted for publication)},
Title = {Visualizing Performance Data With Respect to the Simulated Geometry},
Year = {2016}}
Using Directed Variance to Identify Meaningful Views in Call-Path Performance Profiles
Understanding the performance behaviour of massively parallel high-performance computing (HPC) applications based on call-path performance profiles is a time-consuming task. In this paper, we introduce the concept of directed variance in order to help analysts find performance bottlenecks in massive performance data and in the end optimize the application. According to HPC experts’ requirements, our technique automatically detects severe parts in the data that expose large variation in an application’s performance behaviour across system resources. Previously known variations are effectively filtered out. Analysts are thus guided through a reduced search space towards regions of interest for detailed examination in a 3D visualization. We demonstrate the effectiveness of our approach using performance data of common benchmark codes as well as from actively developed production codes.
@inproceedings{VIERJAHN-2016-04,
Author = {Vierjahn, Tom and Hermanns, Marc-Andr\'{e} and Mohr, Bernd and M\"{u}ller, Matthias S. and Kuhlen, Torsten W. and Hentschel, Bernd},
Booktitle = {3rd Workshop Visual Performance Analysis (to appear)},
Title = {Using Directed Variance to Identify Meaningful Views in Call-Path Performance Profiles},
Year = {2016}}
Poster: Correlating Sub-Phenomena in Performance Data in the Frequency Domain
Finding and understanding correlated performance behaviour of the individual functions of massively parallel high-performance computing (HPC) applications is a time-consuming task. In this poster, we propose filtered correlation analysis for automatically locating interdependencies in call-path performance profiles. Transforming the data into the frequency domain splits a performance phenomenon into sub-phenomena to be correlated separately. We provide the mathematical framework and an overview of the visualization, and we demonstrate the effectiveness of our technique.
Best Poster Award!
@inproceedings{Vierjahn-2016-03,
Author = {Vierjahn, Tom and Hermanns, Marc-Andr\'{e} and Mohr, Bernd and M\"{u}ller, Matthias S. and Kuhlen, Torsten W. and Hentschel, Bernd},
Booktitle = {LDAV 2016 -- Posters (accepted)},
Title = {Correlating Sub-Phenomena in Performance Data in the Frequency Domain}
}
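A minimal sketch of the idea behind this poster, correlating two per-function performance series only within a selected band of frequency components so that overlapping sub-phenomena are compared separately, could look as follows; the band-pass realization and the index-based band selection are illustrative assumptions.

import numpy as np

def filtered_correlation(series_a, series_b, band):
    """Correlate two performance series after isolating one band of
    frequency components; `band` is a (low, high) index range into the
    rFFT spectrum (an assumed, simplified notion of a sub-phenomenon)."""
    def band_pass(x):
        spec = np.fft.rfft(x - x.mean())
        mask = np.zeros_like(spec)
        mask[band[0]:band[1]] = 1.0
        return np.fft.irfft(spec * mask, n=len(x))
    a, b = band_pass(series_a), band_pass(series_b)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

# Two signals share a low-frequency trend but oppose each other at high
# frequencies; correlating per band separates the two sub-phenomena.
t = np.linspace(0, 1, 256, endpoint=False)
a = np.sin(2 * np.pi * 4 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
b = np.sin(2 * np.pi * 4 * t) - 0.5 * np.sin(2 * np.pi * 60 * t)
print(filtered_correlation(a, b, band=(2, 8)))    # ~+1.0 (shared sub-phenomenon)
print(filtered_correlation(a, b, band=(50, 70)))  # ~-1.0 (opposed sub-phenomenon)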
Poster: Evaluating Presence Strategies of Temporarily Required Virtual Assistants
Computer-controlled virtual humans can serve as assistants in virtual scenes. Here, they are usually in almost constant contact with the user. Nonetheless, in some applications assistants are required only temporarily. Consequently, presenting them only when needed, i.e., minimizing their presence time, might be advisable.
To the best of our knowledge, there do not yet exist any design guidelines for such agent-based support systems. Thus, we plan to close this gap by a controlled qualitative and quantitative user study in a CAVE-like environment. We expect users to prefer assistants with a low presence time as well as a low fallback time to get quick support. However, as both factors are linked, a suitable trade-off needs to be found. Thus, we plan to test four different strategies, namely fading, moving, omnipresent, and busy. This work presents our hypotheses and our planned within-subject design.
@InBook{Boensch2016c,
Title = {Evaluating Presence Strategies of Temporarily Required Virtual Assistants},
Author = {Andrea B\"{o}nsch and Tom Vierjahn and Torsten W. Kuhlen},
Pages = {387 - 391},
Publisher = {Springer International Publishing},
Year = {2016},
Month = {September},
Booktitle = {Intelligent Virtual Agents: 16th International Conference, IVA 2016. Proceedings},
Doi = {10.1007/978-3-319-47665-0_39},
Keywords = {Virtual agent, Assistive technology, Immersive virtual environments, User study design},
Url = {http://dx.doi.org/10.1007/978-3-319-47665-0_39}
}
An Integrated Approach for the Knowledge Discovery in Computer Simulation Models with a Multi-dimensional Parameter Space
In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application aimed at optimizing a drilling process that uses a Gaussian laser beam.
@article{Khawli2016,
author = "Khawli, Toufik Al and Gebhardt, Sascha and Eppelt, Urs and Hermanns, Torsten and Kuhlen, Torsten and Schulz, Wolfgang",
title = "An integrated approach for the knowledge discovery in computer simulation models with a multi-dimensional parameter space",
journal = "AIP Conference Proceedings",
year = "2016",
volume = "1738",
number = "1",
eid = 370003,
pages = "",
url = "http://scitation.aip.org/content/aip/proceeding/aipcp/10.1063/1.4952148",
doi = "10.1063/1.4952148"
}
Web-based Interactive and Visual Data Analysis for Ubiquitous Learning Analytics
Interactive visual data analysis is a well-established class of methods for gathering knowledge from raw and complex data. A broad variety of examples can be found in the literature presenting its applicability in various ways and different scientific domains. However, fully fledged solutions for visual analysis addressing learning analytics are still rare. Therefore, this paper discusses visual and interactive data analysis for learning analytics by presenting best practices, followed by a discussion of a general architecture that combines interactive visualization employing the Information Seeking Mantra with the paradigm of coordinated multiple views. Finally, its applicability is demonstrated by a use case for ubiquitous learning analytics, with a focus on the temporal and spatial relation of learning data. The data is gathered from a ubiquitous learning scenario that offers information for students to identify learning partners and provides information to teachers enabling the adaptation of their learning material.
@InProceedings{weyers2016a,
Title = {Web-based Interactive and Visual Data Analysis for Ubiquitous Learning Analytics},
Author = {Benjamin Weyers and Christian Nowke and Torsten Wolfgang Kuhlen and Mouri Kousuke and Hiroaki Ogata},
Booktitle = {First International Workshop on Learning Analytics Across Physical and Digital Spaces co-located with 6th International Conference on Learning Analytics \& Knowledge (LAK 2016)},
Year = {2016},
Pages = {65--69},
Editor = {Roberto Martinez-Maldonado and Davinia Hernandez-Leo},
Volume = {1601},
Keywords = {interactive analysis, web-based visualization, learning analytics},
Url = {http://ceur-ws.org/Vol-1601/}
}
Poster: Evaluation of Hands-Free HMD-Based Navigation Techniques for Immersive Data Analysis
To use the full potential of immersive data analysis when wearing a head-mounted display, the user has to be able to navigate through the spatial data. We collected, developed and evaluated 5 different hands-free navigation methods that are usable while seated in the analyst’s usual workplace. All methods meet the requirements of being easy to learn and inexpensive to integrate into existing workplaces. We conducted a user study with 23 participants which showed that a body leaning metaphor and an accelerometer pedal metaphor performed best within the given task.
Poster: Automatic Generation of World in Miniatures for Realistic Architectural Immersive Virtual Environments
Orientation and wayfinding in architectural Immersive Virtual Environments (IVEs) are non-trivial, accompanying tasks which generally support the users' main task. World in Miniatures (WIMs), essentially 3D maps containing a scene replica, are an established approach to gain survey knowledge about the virtual world, as well as information about the user's relation to it. However, for large-scale, information-rich scenes, scaling and occlusion issues result in diminishing returns. Since there typically is a lack of standardized information regarding scene decompositions, presenting the inside of self-contained scene extracts is challenging.
Therefore, we present an automatic WIM generation workflow for arbitrary, realistic in- and outdoor IVEs in order to support users with meaningfully selected and scaled extracts of the IVE as well as corresponding context information. Additionally, a 3D user interface is provided to manually manipulate the represented extract.
@InProceedings{Boensch2016b,
Title = {Automatic Generation of World in Miniatures for Realistic Architectural Immersive Virtual Environments},
Author = {Andrea B\"{o}nsch and Sebastian Freitag and Torsten W. Kuhlen},
Booktitle = {IEEE Virtual Reality Conference Poster Proceedings},
Year = {2016},
Pages = {155-156},
}
Poster: Formal Evaluation Strategies for Feature Tracking
We present an approach for tracking space-filling features based on a two-step algorithm utilizing two graph optimization techniques. First, one-to-one assignments between successive time steps are found by a matching on a weighted, bi-partite graph. Second, events are detected by computing an independent set on potential event explanations. The main objective of this work is investigating options for formal evaluation of complex feature tracking algorithms in the absence of ground truth data.
@INPROCEEDINGS{Schnorr2016,
Title = {{F}ormal {E}valuation {S}trategies for {F}eature {T}racking},
Author = {Andrea Schnorr and Sebastian Freitag and Dirk Helmrich and Torsten W. Kuhlen and Bernd Hentschel},
Booktitle = {Proceedings of the IEEE Symposium on Large Data Analysis and Visualization (LDAV)},
Year = {2016},
Pages = {103--104},
DOI = {10.1109/LDAV.2016.7874339}
}
Poster: Geometry-Aware Visualization of Performance Data
Phenomena in the performance behaviour of high-performance computing (HPC) applications can stem from the HPC system itself, from the application's code, but also from the simulation domain. In order to analyse the latter phenomena, we propose a system that visualizes profile-based performance data in its spatial context, i.e., on the geometry, in the simulation domain. It thus helps both HPC experts and simulation experts understand the performance data better. In addition, our tool reduces the initially large search space by automatically labelling large-variation views on the data which require detailed analysis.
@inproceedings {eurp.20161136,
booktitle = {EuroVis 2016 - Posters},
editor = {Tobias Isenberg and Filip Sadlo},
title = {{Geometry-Aware Visualization of Performance Data}},
author = {Vierjahn, Tom and Hentschel, Bernd and Kuhlen, Torsten W.},
year = {2016},
publisher = {The Eurographics Association},
ISBN = {978-3-03868-015-4},
DOI = {10.2312/eurp.20161136},
pages = {37--39}
}
Poster: Tracking Space-Filling Features by Two-Step Optimization
We present a novel approach for tracking space-filling features, i.e., a set of features covering the entire domain. The assignment between successive time steps is determined by a two-step, global optimization scheme. First, a maximum-weight, maximal matching on a bi-partite graph is computed to provide one-to-one assignments between features of successive time steps. Second, events are detected in a subsequent step; here the matching step serves to restrict the exponentially large set of potential solutions. To this end, we compute an independent set on a graph representing conflicting event explanations. The method is evaluated by tracking dissipation elements, a structure definition from turbulent flow analysis.
Honorable Mention Award!
@inproceedings {eurp.20161146,
booktitle = {EuroVis 2016 - Posters},
editor = {Tobias Isenberg and Filip Sadlo},
title = {{Tracking Space-Filling Features by Two-Step Optimization}},
author = {Schnorr, Andrea and Freitag, Sebastian and Kuhlen, Torsten W. and Hentschel, Bernd},
year = {2016},
publisher = {The Eurographics Association},
pages = {77--79},
ISBN = {978-3-03868-015-4},
DOI = {10.2312/eurp.20161146}
}
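The first of the two optimization steps described above, one-to-one assignments on a weighted bi-partite graph, can be sketched with an off-the-shelf solver. Using the Hungarian method on an overlap matrix is a simplification under assumed inputs; the paper computes a maximum-weight, maximal matching and resolves events in a separate independent-set step, both of which are omitted here.

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_features(overlap):
    """One-to-one feature assignments between two successive time steps,
    maximizing the total overlap weight on the bi-partite graph."""
    overlap = np.asarray(overlap, dtype=float)
    rows, cols = linear_sum_assignment(-overlap)   # negate costs to maximize
    return [(r, c) for r, c in zip(rows, cols) if overlap[r, c] > 0]

# overlap[i][j]: spatial overlap of feature i at time t with feature j at t+1.
overlap = [[8, 1, 0],
           [2, 5, 0],
           [0, 0, 7]]
print(match_features(overlap))   # [(0, 0), (1, 1), (2, 2)]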
Talk: Two Basic Aspects of Virtual Agents’ Behavior: Collision Avoidance and Presence Strategies
Virtual Agents (VAs) are embedded in virtual environments for two reasons: they enliven architectural scenes by representing more realistic situations, and they are dialogue partners. They can function as training partners such as representing students in a teaching scenario, or as assistants by, e.g., guiding users through a scene or by performing certain tasks either individually or in collaboration with the user. However, designing such VAs is challenging as various requirements have to be met. Two relevant factors will be briefly discussed in the talk: Collision Avoidance and Presence Strategies.
Integrating Visualizations into Modeling NEST Simulations
Modeling large-scale spiking neural networks showing realistic biological behavior in their dynamics is a complex and tedious task. Since these networks consist of millions of interconnected neurons, their simulation produces an immense amount of data. In recent years it has become possible to simulate even larger networks. However, solutions to assist researchers in understanding the simulation's complex emergent behavior by means of visualization are still lacking. While developing tools to partially fill this gap, we encountered the challenge of integrating these tools easily into the neuroscientists' daily workflow. To understand what makes this so challenging, we looked into the workflows of our collaborators and analyzed how they use the visualizations to solve their daily problems. We identified two major issues: first, the analysis process can rapidly change focus, which requires switching the visualization tool that assists in the current problem domain. Second, because of the heterogeneous data that results from simulations, researchers want to relate different kinds of data in order to investigate them effectively. Since a monolithic application model, processing and visualizing all data modalities and reflecting all combinations of possible workflows in a holistic way, is most likely impossible to develop and to maintain, a software architecture that offers specialized visualization tools that run simultaneously and can be linked together to reflect the current workflow is a more feasible approach. To this end, we have developed a software architecture that allows neuroscientists to integrate visualization tools more closely into the modeling tasks. In addition, it forms the basis for semantic linking of different visualizations to reflect the current workflow. In this paper, we present this architecture and substantiate the usefulness of our approach by common use cases we encountered in our collaborative work.
Level-of-Detail Modal Analysis for Real-time Sound Synthesis
Modal sound synthesis is a promising approach for real-time physically-based sound synthesis. A modal analysis is used to compute characteristic vibration modes from the geometry and material properties of scene objects. These modes allow an efficient sound synthesis at run-time, but the analysis is computationally expensive and thus typically computed in a pre-processing step. In interactive applications, however, objects may be created or modified at run-time. Unless the new shapes are known upfront, the modal data cannot be pre-computed and thus a modal analysis has to be performed at run-time. In this paper, we present a system to compute modal sound data at run-time for interactive applications. We evaluate the computational requirements of the modal analysis to determine the computation time for objects of different complexity. Based on these limits, we propose using different levels-of-detail for the modal analysis, using different geometric approximations that trade speed for accuracy, and evaluate the errors introduced by lower-resolution results. Additionally, we present an asynchronous architecture to distribute and prioritize modal analysis computations.
@inproceedings {vriphys.20151335,
booktitle = {Workshop on Virtual Reality Interaction and Physical Simulation},
editor = {Fabrice Jaillet and Florence Zara and Gabriel Zachmann},
title = {{Level-of-Detail Modal Analysis for Real-time Sound Synthesis}},
author = {Rausch, Dominik and Hentschel, Bernd and Kuhlen, Torsten W.},
year = {2015},
publisher = {The Eurographics Association},
ISBN = {978-3-905674-98-9},
DOI = {10.2312/vriphys.20151335},
pages = {61--70}
}
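At the heart of the system above is the modal analysis itself: a generalized eigenproblem K φ = ω² M φ on the stiffness and mass matrices of the object, where a coarser level-of-detail mesh yields smaller matrices and thus a faster but less accurate analysis. The following toy sketch illustrates this under assumed inputs; the 1D chain system stands in for a real FEM assembly from geometry and material data.

import numpy as np
from scipy.linalg import eigh

def modal_analysis(K, M, n_modes):
    """Solve K phi = omega^2 M phi for the lowest vibration modes.
    Smaller (coarser-LOD) K and M make this step correspondingly cheaper."""
    evals, evecs = eigh(K, M)                       # generalized, symmetric
    omega = np.sqrt(np.clip(evals[:n_modes], 0.0, None))
    return omega, evecs[:, :n_modes]

# Toy stand-in: a chain of unit masses coupled by unit springs.
n = 8
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
omega, modes = modal_analysis(K, M, n_modes=3)
print(omega)   # lowest three angular frequencies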
Accurate Contact Modeling for Multi-rate Single-point Haptic Rendering of Static and Deformable Environments
Common approaches for the haptic rendering of complex scenarios employ multi-rate simulation schemes. Here, the collision queries or the simulation of a complex deformable object are often performed asynchronously at a lower frequency, while some kind of intermediate contact representation is used to simulate interactions at the haptic rate. However, this can produce artifacts in the haptic rendering when the contact situation quickly changes and the intermediate representation is not able to reflect the changes due to the lower update rate. We address this problem utilizing a novel contact model. It facilitates the creation of contact representations that are accurate for a large range of motions and multiple simulation time-steps. We handle problematic convex contact regions using a local convex decomposition and special constraints for convex areas. We combine our accurate contact model with an implicit temporal integration scheme to create an intermediate mechanical contact representation, which reflects the dynamic behavior of the simulated objects. Moreover, we propose a new iterative solving scheme for the involved constrained dynamics problems. We increase the robustness of our method using techniques from trust region-based optimization. Our approach can be combined with standard methods for the modeling of deformable objects or constraint-based approaches for the modeling of, for instance, friction or joints. We demonstrate its benefits with respect to the simulation accuracy and the quality of the rendered haptic forces in multiple scenarios.
Best Paper Award!
Bimanual Haptic Simulation of Bone Fracturing for the Training of the Bilateral Sagittal Split Osteotomy
In this work we present a haptic training simulator for a maxillofacial procedure comprising the controlled breaking of the lower mandible. To our knowledge the haptic simulation of fracture is seldom addressed, especially when a realistic breaking behavior is required. Our system combines bimanual haptic interaction with a simulation of the bone based on well-founded methods from fracture mechanics. The system resolves the conflict between simulation complexity and haptic real-time constraints by employing a dedicated multi-rate simulation and a special solving strategy for the occurring mechanical equations. Furthermore, we present remeshing-free methods for collision detection and visualization which are tailored for an efficient treatment of the topological changes induced by the fracture. The methods have been successfully implemented and tested in a simulator prototype using real pathological data and a semi-immersive VR-system with two haptic devices. We evaluated the computational efficiency of our methods and show that a stable and responsive haptic simulation of the fracturing has been achieved.
A Framework for Developing Flexible Virtual-Reality-centered Annotation Systems
The act of note-taking is an essential part of the data analysis process. It has been realized in the form of various annotation systems that have been discussed in many publications. Unfortunately, the focus usually lies on high-level functionality, like interaction metaphors and display strategies. We argue that it is worthwhile to also consider software engineering aspects. Annotation systems often share similar functionality that can potentially be factored into reusable components with the goal to speed up the creation of new annotation systems. At the same time, however, VR-centered annotation systems are not only subject to application-specific requirements, but also to those arising from differences between the various VR platforms, like desktop VR setups or CAVEs. As a result, it is usually necessary to build application-specific VR-centered annotation systems from scratch instead of reusing existing components.
To improve this situation, we present a framework that provides reusable and adaptable building blocks to facilitate the creation of flexible annotation systems for VR applications. We discuss aspects ranging from data representation over persistence to the integration of new data types and interaction metaphors, especially in context of multi-platform applications. To underpin the benefits of such an approach and promote the proposed concepts, we describe how the framework was applied to several of our own projects.
Simulation-based Ultrasound Training Supported by Annotations, Haptics and Linked Multimodal Views
When learning ultrasound (US) imaging, trainees must learn how to recognize structures, interpret textures and shapes, and simultaneously register the 2D ultrasound images to their 3D anatomical mental models. Alleviating the cognitive load imposed by these tasks should free the cognitive resources and thereby improve the learning process. We argue that the amount of cognitive load that is required to mentally rotate the models to match the images to them is too large and therefore negatively impacts the learning process. We present a 3D visualization tool that allows the user to naturally move a 2D slice and navigate around a 3D anatomical model. The slice is displayed in-place to facilitate the registration of the 2D slice in its 3D context. Two duplicates are also shown externally to the model; the first is a simple rendered image showing the outlines of the structures and the second is a simulated ultrasound image. Haptic cues are also provided to the users to help them maneuver around the 3D model in the virtual space. With the additional display of annotations and information of the most important structures, the tool is expected to complement the available didactic material used in the training of ultrasound procedures.
Comparison and Evaluation of Viewpoint Quality Estimation Algorithms for Immersive Virtual Environments
The knowledge of which places in a virtual environment are interesting or informative can be used to improve user interfaces and to create virtual tours. Viewpoint Quality Estimation algorithms approximate this information by calculating quality scores for viewpoints. However, even though several such algorithms exist and have also been used, e.g., in virtual tour generation, they have never been comparatively evaluated on virtual scenes. In this work, we introduce three new Viewpoint Quality Estimation algorithms, and compare them against each other and six existing metrics, by applying them to two different virtual scenes. Furthermore, we conducted a user study to obtain a quantitative evaluation of viewpoint quality. The results reveal strengths and limitations of the metrics on actual scenes, and provide recommendations on which algorithms to use for real applications.
@InProceedings{Freitag2015,
Title = {{Comparison and Evaluation of Viewpoint Quality Estimation Algorithms for Immersive Virtual Environments}},
Author = {Freitag, Sebastian and Weyers, Benjamin and B\"{o}nsch, Andrea and Kuhlen, Torsten W.},
Booktitle = {ICAT-EGVE 2015 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
Year = {2015},
Pages = {53-60},
Doi = {10.2312/egve.20151310}
}
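As one concrete example of the class of metrics compared in the entry above, the following sketches viewpoint entropy, a classic quality measure based on the Shannon entropy of the projected areas of the visible faces. In a real system the areas would come from a renderer's visibility pass; treating this formulation as representative of the evaluated metrics is an assumption for illustration.

import numpy as np

def viewpoint_entropy(visible_areas):
    """Shannon entropy of the normalized projected areas of the visible
    polygons: higher entropy means the view shows many faces at balanced
    sizes, i.e., is more informative."""
    a = np.asarray(visible_areas, dtype=float)
    p = a[a > 0] / a.sum()
    return float(-(p * np.log2(p)).sum())

# A view showing four faces evenly beats one dominated by a single face.
print(viewpoint_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
print(viewpoint_entropy([0.97, 0.01, 0.01, 0.01]))  # ~0.24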
Low-Cost Vision-Based Multi-Person Foot Tracking for CAVE Systems with Under-Floor Projection
In this work, we present an approach for tracking the feet of multiple users in CAVE-like systems with under-floor projection. It is based on low-cost consumer cameras, does not require users to wear additional equipment, and can be installed without modifying existing components. If the brightness of the floor projection does not contain too much variation, the feet of several people can be successfully and precisely tracked and assigned to individuals. The tracking data can be used to enable or enhance user interfaces like Walking-in-Place or torso-directed steering, provide audio feedback for footsteps, and improve the immersive experience for multiple users.
BlowClick: A Non-Verbal Vocal Input Metaphor for Clicking
In contrast to the wide-spread use of 6-DOF pointing devices, freehand user interfaces in Immersive Virtual Environments (IVE) are non-intrusive. However, for gesture interfaces, the definition of trigger signals is challenging. Mechanical devices, dedicated trigger gestures, and speech recognition are commonly used options, but each comes with its own drawbacks. In this paper, we present an alternative approach that allows events to be triggered precisely and with low latency using microphone input. In contrast to speech recognition, the user only blows into the microphone. The audio signature of such blow events can be recognized quickly and precisely. The results of a user study show that the proposed method allows users to successfully complete a standard selection task and performs better than expected against a standard interaction device, the Flystick.
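As an illustration of the general idea, a blow-like event can be detected from short-time audio features. The following sketch is our own simplification, not BlowClick's actual classifier; the thresholds are hypothetical. It combines signal energy with the zero-crossing rate to separate noise-like blowing from voiced speech.

import numpy as np

def detect_blow(frame, rms_thresh=0.2, zcr_thresh=0.3):
    # Blowing produces a noise-like signal: high short-time energy combined
    # with a high zero-crossing rate distinguishes it from voiced speech.
    frame = np.asarray(frame, dtype=float)
    rms = np.sqrt(np.mean(frame ** 2))                    # short-time energy
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0  # sign changes per sample
    return rms > rms_thresh and zcr > zcr_thresh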
Cirque des Bouteilles: The Art of Blowing on Bottles
Making music by blowing on bottles is fun but challenging. We introduce a novel 3D user interface to play songs on virtual bottles. For this purpose, the user blows into a microphone, and the stream of air is recreated in the virtual environment and redirected to the virtual bottles she is pointing at with her fingers. This is easy to learn and subsequently opens up opportunities for quickly switching between bottles and playing groups of them together to form complex melodies. Furthermore, our interface enables the customization of the virtual environment by moving bottles or changing their type or filling level.
Packet-Oriented Streamline Tracing on Modern SIMD Architectures
The advection of integral lines is an important computational kernel in vector field visualization. We investigate how this kernel can profit from vector (SIMD) extensions in modern CPUs. As a baseline, we formulate a streamline tracing algorithm that facilitates auto-vectorization by an optimizing compiler. We analyze this algorithm and propose two different optimizations. Our results show that particle tracing does not per se benefit from SIMD computation. Based on a careful analysis of the auto-vectorized code, we propose an optimized data access routine and a re-packing scheme which increases average SIMD efficiency. We evaluate our approach on three different turbulent flow fields. Our optimized approaches increase integration performance up to 5.6x over our baseline measurement. We conclude with a discussion of current limitations and aspects for future work.
@INPROCEEDINGS{Hentschel2015,
author = {Bernd Hentschel and Jens Henrik G{\"o}bbert and Michael Klemm and Paul Springer and Andrea Schnorr and Torsten W. Kuhlen},
title = {{P}acket-{O}riented {S}treamline {T}racing on {M}odern {SIMD} {A}rchitectures},
booktitle = {Proceedings of the Eurographics Symposium on Parallel Graphics and Visualization},
year = {2015},
pages = {43--52},
abstract = {The advection of integral lines is an important computational kernel in vector field visualization. We investigate how this kernel can profit from vector (SIMD) extensions in modern CPUs. As a baseline, we formulate a streamline tracing algorithm that facilitates auto-vectorization by an optimizing compiler. We analyze this algorithm and propose two different optimizations. Our results show that particle tracing does not per se benefit from SIMD computation. Based on a careful analysis of the auto-vectorized code, we propose an optimized data access routine and a re-packing scheme which increases average SIMD efficiency. We evaluate our approach on three different turbulent flow fields. Our optimized approaches increase integration performance up to 5.6x over our baseline measurement. We conclude with a discussion of current limitations and aspects for future work.}
}
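For readers unfamiliar with the re-packing idea, the following NumPy sketch is our simplification, using plain Euler steps instead of the paper's integrator; the velocity callable is a hypothetical interface. It advances a packet of particles in lockstep and compacts the active set each step so that vector lanes keep doing useful work.

import numpy as np

def trace_packet(velocity, seeds, dt=0.01, max_steps=1000):
    # velocity: callable mapping an (n, 3) array of positions to (n, 3)
    # velocities; one vectorized call per packet lets NumPy (or a compiler)
    # use SIMD across particles.
    pos = np.asarray(seeds, dtype=np.float64)
    active = np.arange(len(pos))            # indices of unfinished particles
    lines = [pos.copy()]
    for _ in range(max_steps):
        if active.size == 0:
            break
        v = velocity(pos[active])           # vectorized field evaluation
        pos[active] += dt * v               # Euler step (for brevity)
        # Re-packing: drop converged/exited particles so all lanes stay busy.
        speed = np.linalg.norm(v, axis=1)
        active = active[speed > 1e-9]
        lines.append(pos.copy())
    return np.stack(lines)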
An Integrative Tool Chain for Collaborative Virtual Museums in Immersive Virtual Environments
Various conceptual approaches for the creation and presentation of virtual museums can be found. However, little work exists that concentrates on collaboration in virtual museums. The support of collaboration in virtual museums provides various benefits for the visit as well as the preparation and creation of virtual exhibits. This paper addresses one major problem of collaboration in virtual museums: the awareness of visitors. We use a Cave Automated Virtual Environment (CAVE) for the visualization of generated virtual museums to offer simple awareness through co-location. Furthermore, the use of smartphones during the visit enables the visitors to create comments or to access exhibit-related metadata. Thus, the main contribution of this ongoing work is the presentation of a workflow that enables an integrated deployment of generic virtual museums into a CAVE, which will be demonstrated by deploying the virtual Leopold Fleischhacker Museum.
Poster: Scalable Metadata In- and Output for Multi-platform Data Annotation Applications
Metadata in- and output are important steps within the data annotation process. However, selecting techniques that effectively facilitate these steps is non-trivial, especially for applications that have to run on multiple virtual reality platforms. Not all techniques are applicable to or available on every system, requiring workflows to be adapted on a per-system basis. Here, we describe a metadata handling system based on Android's Intent system that automatically adapts workflows and thereby makes manual adaptation unnecessary.
Poster: Vision-based Multi-Person Foot Tracking for CAVE Systems with Under-Floor Projection
In this work, we present an approach for tracking the feet of multiple users in CAVE-like systems with under-floor projection. It is based on low-cost consumer cameras, does not require users to wear additional equipment, and can be installed without modifying existing components. If the brightness of the floor projection does not contain too much variation, the feet of several people can be reliably tracked and assigned to individuals.
Poster: Effects and Applicability of Rotation Gain in CAVE-like Environments
In this work, we report on a pilot study we conducted, and on a study design, to examine the effects and applicability of rotation gain in CAVE-like virtual environments. The results of the study will give recommendations for the maximum levels of rotation gain that are reasonable in algorithms for enlarging the virtual field of regard or redirected walking.
Poster: flapAssist: How the Integration of VR and Visualization Tools Fosters the Factory Planning Process
Virtual Reality (VR) systems are of growing importance to aid decision support in the context of the digital factory, especially factory layout planning. While current solutions either focus on virtual walkthroughs or the visualization of more abstract information, a solution that provides both does not currently exist. To close this gap, we present a holistic VR application, called Factory Layout Planning Assistant (flapAssist). It is meant to serve as a platform for planning the layout of factories, while also providing a wide range of analysis features. By being scalable from desktops to CAVEs and providing a link to a central integration platform, flapAssist integrates well into established factory planning workflows.
Poster: Tracking Space-Filling Structures in Turbulent Flows
We present a novel approach for tracking space-filling features, i.e. a set of features which covers the entire domain. In contrast to previous work, we determine the assignment between features from successive time steps by computing a globally optimal, maximum-weight, maximal matching on a weighted, bi-partite graph. We demonstrate the method's functionality by tracking dissipation elements (DEs), a space-filling structure definition from turbulent flow analysis. The ability to track DEs over time enables researchers from fluid mechanics to extend their analysis beyond the assessment of static flow fields to time-dependent settings.
@INPROCEEDINGS{Schnorr2015,
author = {Andrea Schnorr and Jens-Henrik G{\"o}bbert and Torsten W. Kuhlen and Bernd Hentschel},
title = {{T}racking {S}pace-{F}illing {S}tructures in {T}urbulent {F}lows},
booktitle = {Proceedings of the IEEE Symposium on Large Data Analysis and Visualization (LDAV)},
year = {2015},
pages = {143--144},
abstract = {We present a novel approach for tracking space-filling features, i.e., a set of features which covers the entire domain. In contrast to previous work, we determine the assignment between features from successive time steps by computing a globally optimal, maximum-weight, maximal matching on a weighted, bi-partite graph. We demonstrate the method's functionality by tracking dissipation elements (DEs), a space-filling structure definition from turbulent flow analysis. The ability to track DEs over time enables researchers from fluid mechanics to extend their analysis beyond the assessment of static flow fields to time-dependent settings.},
doi = {10.1109/LDAV.2015.7348089},
keywords = {Feature Tracking, Weighted Bi-Partite Matching, Flow Visualization, Dissipation Elements}
}
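The assignment step can be prototyped with an off-the-shelf solver. The sketch below approximates the idea using SciPy's linear_sum_assignment rather than the paper's exact maximal-matching formulation: features of consecutive time steps are matched by maximizing total overlap weight.

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_features(overlap):
    # overlap: (n_t, n_t1) matrix of pairwise weights between features at
    # time t and t+1 (e.g., spatial overlap). Returns index pairs (i, j)
    # meaning feature i at t continues as feature j at t+1.
    rows, cols = linear_sum_assignment(overlap, maximize=True)
    # Discard zero-weight pairs: those features appear or vanish instead.
    keep = overlap[rows, cols] > 0
    return list(zip(rows[keep], cols[keep]))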
Ein Konzept zur Integration von Virtual Reality Anwendungen zur Verbesserung des Informationsaustauschs im Fabrikplanungsprozess - A Concept for the Integration of Virtual Reality Applications to Improve the Information Exchange within the Factory Planning Process
Factory planning is a highly heterogeneous process that involves various expert groups at the same time. In this context, the communication between different expert groups poses a major challenge. One reason for this lies in the differing domain knowledge of the individual groups. However, since decisions made within one domain usually have an effect on others, it is essential to make these domain interactions visible to all involved experts in order to improve the overall planning process. In this paper, we present a concept that facilitates the integration of two separate virtual-reality and visualization analysis tools for different application domains of the planning process. The concept was developed in the context of the Virtual Production Intelligence and aims at making domain interactions visible, such that the aforementioned challenges can be mitigated.
@Article{Pick2015,
Title = {{Ein Konzept zur Integration von Virtual Reality Anwendungen zur Verbesserung des Informationsaustauschs im Fabrikplanungsprozess}},
Author = {S. Pick and S. Gebhardt and B. Hentschel and T. W. Kuhlen and R. Reinhard and C. B{\"u}scher and T. Al Khawli and U. Eppelt and H. Voet and J. Utsch},
Journal = {Tagungsband 12. Paderborner Workshop Augmented \& Virtual Reality in der Produktentstehung},
Year = {2015},
Pages = {139--152}
}
Ein Ansatz zur Softwaretechnischen Integration von Virtual Reality Anwendungen am Beispiel des Fabrikplanungsprozesses - An Approach for the Softwaretechnical Integration of Virtual Reality Applications by the Example of the Factory Planning Process
The integration of independent applications is a complex task from a software engineering perspective. Nonetheless, it entails significant benefits, especially in the context of Virtual Reality (VR) supported factory planning, e.g., to communicate interdependencies between different domains. To emphasize this aspect, we integrated two independent VR and visualization applications into a holistic planning solution. Special focus was put on parallelization and interaction aspects, while also considering more general requirements of such an integration process. In summary, we present technical solutions for the effective integration of several VR applications into a holistic solution, demonstrated by the integration of two applications from the context of factory planning with special focus on parallelism and interaction aspects. The effectiveness of the approach is demonstrated by performance measurements.
@Article{Gebhardt2015,
Title = {{Ein Ansatz zur Softwaretechnischen Integration von Virtual Reality Anwendungen am Beispiel des Fabrikplanungsprozesses}},
Author = {S. Gebhardt and S. Pick and B. Hentschel and T. W. Kuhlen and R. Reinhard and C. B{\"u}scher},
Journal = {Tagungsband 12. Paderborner Workshop Augmented \& Virtual Reality in der Produktentstehung},
Year = {2015},
Pages = {153--166}
}
Immersive Art: Using a CAVE-like Virtual Environment for the Presentation of Digital Works of Art
Digital works of art are often created using some kind of modeling software, like Cinema4D. Usually they are presented in a non-interactive form, like large Diasecs, and can thus only be experienced by passive viewing. To explore alternative, more captivating presentation channels, we investigate the use of a CAVE virtual reality (VR) system as an immersive and interactive presentation platform in this paper. To this end, in a collaboration with an artist, we built an interactive VR experience from one of his existing works. We provide details on our design and report on the results of a qualitative user study.
@Article{Pick2015a,
Title = {{Immersive Art: Using a CAVE-like Virtual Environment for the Presentation of Digital Works of Art}},
Author = {Pick, Sebastian and B\"{o}nsch, Andrea and Scully, Dennis and Kuhlen, Torsten W.},
Journal = {{V}irtuelle und {E}rweiterte {R}ealit\"at, 12. {W}orkshop der {GI}-{F}achgruppe {VR}/{AR}},
Year = {2015},
Pages = {10--21},
ISBN = {978-3-8440-3868-2},
Publisher = {Shaker Verlag}
}
Efficient Modal Sound Synthesis on GPUs
Modal sound synthesis is a useful method to interactively generate sounds for Virtual Environments. Forces acting on objects excite modes, which then have to be accumulated to generate the output sound. Due to the high audio sampling rate, algorithms using the CPU can typically handle only a few actively sounding objects. Additionally, force excitation should be applied at a high sampling rate. We present different algorithms to compute the synthesized sound using a GPU, and compare them to CPU implementations. The GPU algorithms show significantly higher performance and allow many sounding objects simultaneously.
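The accumulation at the core of modal synthesis is a sum of damped sinusoids. The following reference sketch, in plain NumPy on the CPU, is purely illustrative of what the GPU implementations parallelize across modes and samples.

import numpy as np

def synthesize_modes(freqs, dampings, gains, duration, sr=44100):
    # Each mode i contributes a damped sinusoid
    #   a_i * exp(-d_i * t) * sin(2*pi*f_i * t);
    # summing over all modes yields the output audio signal.
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for f, d, a in zip(freqs, dampings, gains):
        out += a * np.exp(-d * t) * np.sin(2.0 * np.pi * f * t)
    return out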
Quo Vadis CAVE – Does Immersive Visualization Still Matter?
More than two decades have passed since the introduction of the CAVE (Cave Automatic Virtual Environment), a landmark in the development of VR [1]. The CAVE addressed two major issues with head-mounted displays of the era. First, it provided an unprecedented field of view, greatly improving the feeling of presence in a virtual environment (VE). Second, this feeling was amplified because users didn’t have to rely on a virtual representation of their own bodies or parts thereof. Instead, they could physically enter the virtual space. Scientific visualization had been promulgated as a killer app for VR technology almost from day one. With the CAVE’s inception, it became possible to “put users within their data.” Proponents predicted two key advantages. First, immersive VR promised faster, more comprehensive understanding of complex, spatial relationships owing to head-tracked, stereoscopic rendering. Second, it would provide a more natural user interface, specifically for spatial interaction. In a seminal article, Andy van Dam and his colleagues proposed VR-enabled visualization as a midterm solution to the “accelerating data crisis” [2]. That is, the ability to generate data had for some time outpaced the ability to analyze it. Over the years, a number of studies have investigated the effects of VR-based visualizations in specific application scenarios. Recently, Bireswar Laha and his colleagues provided more general, empirical evidence for its benefits. Although VR and scientific visualization have matured and many of the original technical limitations have been resolved, immersive visualization has yet to find the widespread, everyday use that was claimed in the early days. At the same time, the demand for scalable visualization solutions is greater than ever. If anything, the gap between data generation and analysis capabilities has widened even more. So, two questions arise. What should such scalable solutions look like, and what requirements arise regarding the underlying hardware and software and the overall methodology?
Preliminary Bone Sawing Model for a Virtual Reality-Based Training Simulator of Bilateral Sagittal Split Osteotomy
Successful bone sawing requires a high level of skill and experience, which could be gained by the use of Virtual Reality-based simulators. A key aspect of these medical simulators is realistic force feedback. The aim of this paper is to model the bone sawing process in order to develop a valid training simulator for the bilateral sagittal split osteotomy, the most often applied corrective surgery in case of a malposition of the mandible. Bone samples from a human cadaveric mandible were tested using a designed experimental system. Image processing and statistical analysis were used for the selection of four models for the bone sawing process. The results revealed a polynomial dependency between the material removal rate and the applied force. Differences between the three segments of the osteotomy line and between the cortical and cancellous bone were highlighted.
Virtuelle Realität als Gegenstand und Werkzeug der Wissenschaft - Virtual Reality as a Subject and Tool of Science
This contribution introduces the discipline of Virtual Reality (VR) as an important manifestation of virtuality. VR is understood as a special form of human-computer interface that incorporates several human senses into the interaction and evokes in the user the illusion of perceiving a computer-generated artificial world as real. The contribution shows that extensive methodological research across several disciplines is necessary to reach this ultimate goal, or at least to come closer to it. Finally, three different applications are presented which demonstrate the manifold ways in which VR can be employed as a tool in the sciences.
Reorientation in Virtual Environments using Interactive Portals
Real walking is the most natural method of navigation in virtual environments. However, physical space limitations often prevent or complicate its continuous use. Thus, many real walking interfaces, among them redirected walking techniques, depend on a reorientation technique that redirects the user away from physical boundaries when they are reached. However, existing reorientation techniques typically actively interrupt the user, or depend on the application of rotation gain that can lead to simulator sickness. In our approach, the user is reoriented using portals. While one portal is placed automatically to guide the user to a safe position, she controls the target selection and physically walks through the portal herself to perform the reorientation. In a formal user study we show that the method does not cause additional simulator sickness, and participants walk more than with point-and-fly navigation or teleportation, at the expense of longer completion times.
Best Technote!
@INPROCEEDINGS{freitag2014,
author={S. Freitag and D. Rausch and T. Kuhlen},
booktitle={2014 IEEE Symposium on 3D User Interfaces (3DUI)},
title={{Reorientation in Virtual Environments Using Interactive Portals}},
year={2014},
pages={119-122},
doi={10.1109/3DUI.2014.6798852},
month={March},
}
Advanced Virtual Reality and Visualization Support for Factory Layout Planning
Recently, more and more Virtual Reality (VR) and visualization solutions to support the factory layout planning process have been presented. On the one hand, VR enables planners to create cost-effective virtual prototypes and to perform virtual walkthroughs, e.g., to verify proposed layouts. On the other hand, visualization helps to gain insight into simulation results that, e.g., describe the various interdependencies between machines, such as material flows. In order to create truly effective tools based on VR and visualization, the right techniques have to be chosen and adapted to the specific problem. However, the solutions published so far usually do not exploit these technologies to their full potential.
To address this situation, we present a VR-based planning assistant that offers advanced visualization functionality that furthers the understanding of planning-relevant parameters, while also relying on established techniques. In order to realize a useful approach, the assistant fulfills three central requirements:
- A smooth integration of the assistant into existing workflows is essential in order not to disrupt them. Consequently, existing tools need to be properly integrated and a mechanism for data exchange with these tools has to be provided.
- Visualization is the main means of facilitating insight. Instead of only displaying factory models, advanced techniques to visualize more abstract quantities, like material flows or process chains, have to be provided.
- VR systems vary in the degree of immersion they offer, ranging from non-immersive desktop systems to fully immersive Cave Automatic Virtual Environment (CAVE) systems. Scalability across these systems allows the assistant to target high-end installations as well as cost-effective solutions. However, to ensure good scalability, a flexible system abstraction and a unified interaction concept are essential.
The base for our planning assistant is an immersive VR (IVR) system in the form of a CAVE. Our solution allows performing virtual walkthroughs and offers additional visualization techniques for planning-relevant data.
A 3D Collaborative Virtual Environment to Integrate Immersive Virtual Reality into Factory Planning Processes
In the recent past, efforts have been made to adopt immersive virtual reality (IVR) systems as a means for design reviews in factory layout planning. While several solutions for this scenario have been developed, their integration into existing planning workflows has not been discussed yet. From our own experience of developing such a solution, we conclude that the use of IVR systems, like CAVEs, is rather disruptive to existing workflows. One major reason for this is that IVR systems are not available everywhere due to their high costs and large physical footprint. As a consequence, planners have to travel to sites offering such systems, which is especially prohibitive as planners are usually geographically dispersed. In this paper, we present a concept for integrating IVR systems into the factory planning process by means of a 3D collaborative virtual environment (3DCVE) without disrupting the underlying planning workflow. The goal is to combine non-immersive and IVR systems to facilitate collaborative walkthrough sessions. However, this scenario poses unique challenges to interactive collaborative work that, to the best of our knowledge, have not been addressed so far. In this regard, we discuss approaches to viewpoint sharing, telepointing, and annotation support that are geared towards distributed heterogeneous 3DCVEs.
Geometrically Limited Constraints for Physics-based Haptic Rendering
In this paper, a single-point haptic rendering technique is proposed which uses a constraint-based physics simulation approach. Geometries are sampled using point shell points, each associated with a small disk, that jointly result in a closed surface for the whole shell. The geometric information is incorporated into the constraint-based simulation using newly introduced geometrically limited contact constraints, which are active in a restricted region corresponding to the disks in contact. The usage of disk constraints not only creates closed surfaces, which is important for single-point rendering, but also tackles the problem of over-constrained contact situations in convex geometric setups. Furthermore, an iterative solving scheme for dynamic problems under consideration of the proposed constraint type is presented. Finally, an evaluation of the simulation approach shows the advantages compared to standard contact constraints regarding the quality of the rendered forces.
Data-flow Oriented Software Framework for the Development of Haptic-enabled Physics Simulations
This paper presents a software framework that supports the development of haptic-enabled physics simulations. The framework provides tools aiming to facilitate a fast prototyping process by utilizing component and flow-oriented architectures, while maintaining the capability to create efficient code which fulfills the performance requirements induced by the target applications. We argue that such a framework should not only ease the creation of prototypes but also help to effectively and efficiently evaluate them. To this end, we provide analysis tools and the possibility to build problem oriented evaluation environments based on the described software concepts. As motivating use case, we present a project with the goal to develop a haptic-enabled medical training simulator for a maxillofacial procedure. With this example, we demonstrate how the described framework can be used to create a simulation architecture for a complex haptic simulation and how the tools assist in the prototyping process.
An Evaluation of a Smart-Phone-Based Menu System for Immersive Virtual Environments
System control is a crucial task for many virtual reality applications and can be realized in a broad variety of ways, with graphical menus being the most common. These are often implemented as part of the virtual environment, but can also be displayed on mobile devices. Until now, many systems and studies have been published on using mobile devices such as personal digital assistants (PDAs) to realize such menu systems. However, most of these systems were proposed long before smartphones existed and evolved into everyday companions for many people. Thus, it is worthwhile to evaluate the applicability of modern smartphones as carriers of menu systems for immersive virtual environments. To do so, we implemented a platform-independent menu system for smartphones and evaluated it in two different ways. First, we performed an expert review in order to identify potential design flaws and to test the applicability of the approach for demonstrations of VR applications from a demonstrator's point of view. Second, we conducted a user study with 21 participants to test user acceptance of the menu system. The results of the two studies were contradictory: while experts appreciated the system very much, user acceptance was lower than expected. From these results we could draw conclusions on how smartphones should be used to realize system control in virtual environments, and we could identify connecting factors for future research on the topic.
Integration of VR and Visualization Tools to Foster the Factory Planning Process
Recently, virtual reality (VR) and visualization have been increasingly employed to facilitate various tasks in factory planning processes. One major challenge in this context lies in the exchange of information between expert groups concerned with distinct planning tasks in order to make planners aware of interdependencies. For example, changes to the configuration of individual machines can have an effect on the overall production performance and vice versa. To this end, we developed VR- and visualization-based planning tools for two distinct planning tasks, for which we present an integration concept that facilitates information exchange between these tools. The first application's goal is to facilitate layout planning by means of a CAVE system. The high degree of immersion offered by this system allows users to judge spatial relations in entire factories through cost-effective virtual walkthroughs. Additionally, information like material flow data can be visualized within the virtual environment to further assist planners in comprehensively evaluating the factory layout. The second application focuses on individual machines with the goal of helping planners find ideal configurations by providing a visualization solution to explore the multi-dimensional parameter space of a single machine. This is made possible through the use of meta-models of the parameter space that are then visualized by means of the Hyperslice concept. In this paper we present a concept that shows how these applications can be integrated into one comprehensive planning tool that allows for planning factories while considering factors of different planning levels at the same time. The concept is backed by Virtual Production Intelligence (VPI), which integrates data from different levels of factory processes, while including additional data sources and algorithms to provide further information to be used by the applications. In conclusion, we present an integration concept for VR- and visualization-based software tools that facilitates the communication of interdependencies between different factory planning tasks. As a first step towards creating a comprehensive factory planning solution, we demonstrate the integration of the aforementioned two use cases by applying VPI. Finally, we review the proposed concept by discussing its benefits and pointing out potential implementation pitfalls.
An Unusual Linker and an Unexpected Node: CaCl2 Dumbbells Linked by Proline to Form Square Lattice Networks
Four new structures based on CaCl2 and proline are reported, all with an unusual Cl–Ca–Cl moiety. Depending on the stoichiometry and the chirality of the amino acid, this metal dihalide fragment represents the core of a mononuclear Ca complex or may be linked by the carboxylate to form extended structures. A cisoid coordination of the halide atoms at the calcium cation is encountered in a chain polymer. In the 2D structures, CaCl2 dumbbells act as nodes and are crosslinked by either enantiomerically pure or racemic proline to form square lattice nets. Extensive database searches and topology tests prove that this structure type is rare for MCl2 dumbbells in general and unprecedented for Ca compounds.
@Article{Lamberts2014,
Title = {{An Unusual Linker and an Unexpected Node: CaCl2 Dumbbells Linked by Proline to Form Square Lattice Networks}},
Author = {Lamberts, Kevin and Porsche, Sven and Hentschel, Bernd and Kuhlen, Torsten and Englert, Ulli},
Journal = {CrystEngComm},
Year = {2014},
Pages = {3305-3311},
Volume = {16},
Doi = {10.1039/C3CE42357C},
Issue = {16},
Publisher = {The Royal Society of Chemistry},
Url = {http://dx.doi.org/10.1039/C3CE42357C}
}
The Human Brain Project - Chances and Challenges for Cognitive Systems
The Human Brain Project (HBP) is one of the largest scientific initiatives dedicated to the research of the human brain worldwide. Over 80 research groups from a broad variety of scientific areas, such as neuroscience, simulation science, high performance computing, robotics, and visualization, work together in this European research initiative. The work at hand identifies chances and challenges for cognitive systems engineering resulting from the HBP research activities. Besides the HBP's main goal of gaining deeper insights into the structure and function of the human brain, cognitive systems research can directly benefit from the creation of cognitive architectures, the simulation of neural networks, and the application of these in the context of (neuro-)robotics. Nevertheless, challenges arise regarding the utilization and transformation of these research results for cognitive systems, which will be discussed in this paper. Visualization techniques that help to understand and gain insight into complex data are necessary tools for coping with these challenges. Therefore, this paper presents a set of visualization techniques developed at the Virtual Reality Group at RWTH Aachen University.
@inproceedings{Weyers2014,
author = {Weyers, Benjamin and Nowke, Christian and H{\"{a}}nel, Claudia and Zielasko, Daniel and Hentschel, Bernd and Kuhlen, Torsten},
booktitle = {Workshop Kognitive Systeme: Mensch, Teams, Systeme und Automaten},
title = {{The Human Brain Project – Chances and Challenges for Cognitive Systems}},
year = {2014}
}
Interactive Volume Rendering for Immersive Virtual Environments
Immersive virtual environments (IVEs) are an appropriate platform for 3D data visualization and exploration as, for example, the spatial understanding of these data is facilitated by stereo technology. However, in comparison to desktop setups, a lower latency and thus a higher frame rate are mandatory. In this paper we argue that current realizations of direct volume rendering do not achieve the latency and visual quality required to preserve immersion in virtual environments. To this end, we analyze published acceleration techniques and discuss their potential in IVEs; furthermore, head tracking is considered as a main challenge but also a starting point for specific optimization techniques.
@inproceedings{Hanel2014,
author = {H{\"{a}}nel, Claudia and Weyers, Benjamin and Hentschel, Bernd and Kuhlen, Torsten W.},
booktitle = {IEEE VIS International Workshop on 3DVis: Does 3D really make sense for Data Visualization?},
title = {{Interactive Volume Rendering for Immersive Virtual Environments}},
year = {2014}
}
Visualization of Memory Access Behavior on Hierarchical NUMA Architectures
The available memory bandwidth of existing high performance computing platforms is increasingly becoming the limiting factor for various applications. Therefore, modern microarchitectures integrate the memory controller on the processor chip, which leads to a non-uniform memory access behavior of such systems. This access behavior in turn entails major challenges in the development of shared memory parallel applications. Improperly implemented memory access results in a poor ratio between local and remote memory access and causes low performance on such architectures. To address this problem, the developers of such applications rely on tools that make these kinds of performance problems visible. This work presents a new tool for the visualization of performance data of the non-uniform memory access behavior. Owing to the visual design of the tool, the developer is able to judge the severity of remote memory access in a time-dependent simulation, which is currently not possible using existing tools.
Poster: Visualizing Geothermal Simulation Data with Uncertainty
Simulations of geothermal reservoirs inherently contain uncertainty due to the fact that the underlying physical models are created from sparse data. Moreover, this uncertainty often cannot be completely expressed by simple key measures (e.g., mean and standard deviation), as the distribution of possible values is often not unimodal. Nevertheless, existing visualizations of these simulation data often completely neglect displaying the uncertainty, or are limited to a mean/variance representation. We present an approach to visualize geothermal simulation data that deals with both cases: scalar uncertainties as well as general ensembles of data sets. Users can interactively define two-dimensional transfer functions to visualize data and uncertainty values directly, or browse a 2D scatter plot representation to explore different possibilities in an ensemble.
Poster: Guided Tour Creation in Immersive Virtual Environments
Guided tours have been found to be a good approach to introducing users to previously unknown virtual environments and to allowing them access to relevant points of interest. Two important tasks during the creation of guided tours are the definition of views onto relevant information and their arrangement into the order in which they are to be visited. To allow maximum flexibility, an interactive approach to these tasks is desirable. To this end, we present and evaluate two approaches to the mentioned interaction tasks in this paper. The first approach is a hybrid 2D/3D interaction metaphor in which a tracked tablet PC is used as a virtual digital camera that allows the user to specify and order views onto the scene. The second one is a purely 3D version of the first one, which does not require a tablet PC. Both approaches were compared in an initial user study, whose results indicate a superiority of the 3D over the hybrid approach.
@InProceedings{Pick2014,
Title = {{P}oster: {G}uided {T}our {C}reation in {I}mmersive {V}irtual {E}nvironments},
Author = {Sebastian Pick and Andrea B\"{o}nsch and Irene Tedjo-Palczynski and Bernd Hentschel and Torsten Kuhlen},
Booktitle = {IEEE Symposium on 3D User Interfaces (3DUI), 2014},
Year = {2014},
Month = {March},
Pages = {151-152},
Doi = {10.1109/3DUI.2014.6798865},
Url = {http://ieeexplore.ieee.org/xpl/abstractReferences.jsp?arnumber=6798865}
}
Poster: Interactive 3D Force-Directed Edge Bundling on Clustered Edges
Graphs play an important role in data analysis. Especially graphs with a natural spatial embedding can benefit from a 3D visualization. But even more than in 2D, graphs visualized as intuitively readable 3D node-link diagrams can become very cluttered. This makes graph exploration and data analysis difficult. For this reason, we focus on the challenge of reducing edge clutter by utilizing edge bundling. In this paper we introduce a parallel, edge-cluster-based accelerator for the force-directed edge bundling algorithm presented in [Holten2009]. This opens up the possibility for user interaction during and after both the clustering and the bundling.
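To make the force model concrete, here is a vectorized sketch of one bundling iteration; it is our own condensed variant of force-directed edge bundling, and the array names and compatibility-matrix interface are assumptions. Spring forces keep subdivision points on their edge, while attraction between corresponding points of compatible edges produces the bundles; restricting the compatibility weights to within-cluster pairs is where the described edge-cluster acceleration would plug in.

import numpy as np

def bundle_iteration(P, compat, k=0.1, step=0.01):
    # P: (n_edges, n_pts, 3) subdivision points per edge.
    # compat: (n_edges, n_edges) edge-compatibility weights in [0, 1].
    # Spring forces pull each interior point toward its neighbors on the edge.
    spring = k * (P[:, :-2] + P[:, 2:] - 2.0 * P[:, 1:-1])
    # Electrostatic forces attract corresponding points of compatible edges.
    diff = P[None, :, 1:-1] - P[:, None, 1:-1]       # pairwise point offsets
    dist = np.linalg.norm(diff, axis=-1) + 1e-9
    attract = (compat[:, :, None, None] * diff / dist[..., None]).sum(axis=1)
    P[:, 1:-1] += step * (spring + attract)          # move interior points only
    return P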
Interactive Definition of Discrete Color Maps for Volume Rendered Data in Immersive Virtual Environments
The visual discrimination of different structures in one or multiple combined volume data sets is generally done with individual transfer functions that can usually be adapted interactively. Immersive virtual environments support the depth perception and thus the spatial orientation in these volume visualizations. However, complex 2D menus for elaborate transfer function design cannot be easily integrated. We therefore present an approach for changing the color mapping during volume exploration with direct volume interaction and an additional 3D widget. In this way we incorporate the modification of a color mapping for a large number of discretely labeled brain areas in an intuitive way into the virtual environment. We use our approach for the analysis of a patient’s data with a brain tissue degenerating disease to allow for an interactive analysis of affected regions.
@inproceedings{Hanel2014a,
address = {Minneapolis},
author = {H{\"{a}}nel, Claudia and Freitag, Sebastian and Hentschel, Bernd and Kuhlen, Torsten},
booktitle = {2nd International Workshop on Immersive Volumetric Interaction (WIVI 2014) at IEEE Virtual Reality 2014},
editor = {Banic, Amy and O'Leary, Patrick and Laha, Bireswar},
title = {{Interactive Definition of Discrete Color Maps for Volume Rendered Data in Immersive Virtual Environments}},
year = {2014}
}
Software Phantom with Realistic Speckle Modeling for Validation of Image Analysis Methods in Echocardiography
Computer-assisted processing and interpretation of medical ultrasound images is one of the most challenging tasks within image analysis. Physical phenomena in ultrasonographic images, e.g., the characteristic speckle noise and shadowing effects, render the majority of standard image analysis methods suboptimal. Furthermore, validation of adapted computer vision methods proves to be difficult due to missing ground truth information. There is no widely accepted software phantom in the community, and existing software phantoms are not flexible enough to support the use of specific speckle models for different tissue types, e.g., muscle and fat tissue. In this work we propose an anatomical software phantom with a realistic speckle pattern simulation to fill this gap and provide a flexible tool for validation purposes in medical ultrasound image analysis. We discuss the generation of speckle patterns and perform statistical analysis of the simulated textures to obtain quantitative measures of the realism and accuracy of the resulting textures.
Interactive 3D Visualization of Structural Changes in the Brain of a Person With Corticobasal Syndrome
The visualization of the progression of brain tissue loss in neurodegenerative diseases like corticobasal syndrome (CBS) can provide not only information about the localization and distribution of the volume loss, but also helps to understand the course and the causes of this neurodegenerative disorder. The visualization of such medical imaging data is often based on 2D sections, because they show both internal and external structures in one image. Spatial information, however, is lost. 3D visualization of imaging data can solve this problem, but it faces the difficulty that more internally located structures may be occluded by structures near the surface. Here, we present an application with two designs for the 3D visualization of the human brain to address these challenges. In the first design, brain anatomy is displayed semi-transparently; it is supplemented by an anatomical section and cortical areas for spatial orientation, and the volumetric data of volume loss. The second design is guided by the principle of importance-driven volume rendering: a direct line-of-sight to the relevant structures in the deeper parts of the brain is provided by cutting out a frustum-like piece of brain tissue. The application was developed to run both in standard desktop environments and in immersive virtual reality environments with stereoscopic viewing for improved depth perception. We conclude that the presented application facilitates the perception of the extent of brain degeneration with respect to its localization and affected regions.
@article{Hanel2014b,
author = {H{\"{a}}nel, Claudia and Pieperhoff, Peter and Hentschel, Bernd and Amunts, Katrin and Kuhlen, Torsten},
issn = {1662-5196},
journal = {Frontiers in Neuroinformatics},
number = {42},
pmid = {24847243},
title = {{Interactive 3D visualization of structural changes in the brain of a person with corticobasal syndrome.}},
url = {http://journal.frontiersin.org/article/10.3389/fninf.2014.00042/abstract},
volume = {8},
year = {2014}
}
Failure Mode and Effects Analysis in Designing a Virtual Reality-Based Training Simulator for Bilateral Sagittal Split Osteotomy
Virtual reality-based simulators offer a cost-effective and efficient alternative to traditional medical training and planning. Developing a simulator that enables the training of medical skills and also supports recognition of errors made by the trainee is a challenge. The first step in developing such a system consists of error identification in the real procedure, in order to ensure that the training environment covers the most significant errors that can occur. This paper focuses on identifying the main system requirements for an interactive simulator for training bilateral sagittal split osteotomy (BSSO). An approach is proposed based on failure mode and effects analysis (FMEA), a risk analysis method that is well structured and already an approved technique in other domains. Based on the FMEA results, a BSSO training simulator is currently being developed, which centers upon the main critical steps of the procedure (sawing and splitting) and their main errors. FMEA seems to be a suitable tool in the design phase of developing medical simulators. Herein, it serves as a communication medium for knowledge transfer between the medical experts and the system developers. The method encourages a reflective process and allows identification of the most important elements and scenarios that need to be trained.
VisNEST – Interactive Analysis of Neural Activity Data
The aim of computational neuroscience is to gain insight into the dynamics and functionality of the nervous system by means of modeling and simulation. Current research leverages the power of High Performance Computing facilities to enable multi-scale simulations capturing both low-level neural activity and large-scale interactions between brain regions. In this paper, we describe an interactive analysis tool that enables neuroscientists to explore data from such simulations. One of the driving challenges behind this work is the integration of macroscopic data at the level of brain regions with microscopic simulation results, such as the activity of individual neurons. While researchers validate their findings mainly by visualizing these data in a non-interactive fashion, state-of-the-art visualizations, tailored to the scientific question yet sufficiently general to accommodate different types of models, enable such analyses to be performed more efficiently. This work describes several visualization designs, conceived in close collaboration with domain experts, for the analysis of network models. We primarily focus on the exploration of neural activity data, inspecting connectivity of brain regions and populations, and visualizing activity flux across regions. We demonstrate the effectiveness of our approach in a case study conducted with domain experts.
An Evaluation of Two Simple Methods for Representing Heaviness in Immersive Virtual Environments
Weight perception in virtual environments can generally be achieved with haptic devices. However, most of these are hard to integrate into an immersive virtual environment (IVE) due to their technical complexity and the restriction of a user's movement within the IVE. We describe two simple methods using only a wireless, light-weight finger-tracking device in combination with a physics-simulated hand model to create a feeling of heaviness of virtual objects when interacting with them in an IVE. The first method maps the varying distance between the tracked fingers and the thumb to the grasping force required for lifting a virtual object with a given weight. The second method maps the detected intensity of finger pinch during grasping gestures to the lifting force. In an experiment described in this paper, we investigated the potential of the proposed methods for the discrimination of heaviness of virtual objects by finding the just noticeable difference (JND) to calculate the Weber fraction. Furthermore, the workload that users experienced using these methods was measured to gain more insight into their usefulness as interaction techniques. At a hit ratio of 0.75, the determined Weber fraction was 16.25% for the finger-distance-based method and 15.48% for the pinch-based method, which corresponds to values found in related work. There was no significant effect of method on the measured difference threshold or the experienced workload; however, user preference was higher for the pinch-based method. The results demonstrate the capability of the proposed methods for the perception of heaviness in IVEs and therefore represent a simple alternative to haptics-based methods.
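For reference, the Weber fraction follows directly from the measured JND; a tiny worked example (the reference heaviness below is hypothetical, only the resulting 16.25% is from the study):

# Weber fraction c = JND / reference intensity, evaluated at the 0.75 hit ratio.
reference_weight = 2.0          # hypothetical reference heaviness (arbitrary units)
jnd = 0.325                     # hypothetical JND measured at a 0.75 hit ratio
weber_fraction = jnd / reference_weight
print(f"{weber_fraction:.2%}")  # 16.25%, as reported for the finger-distance-based method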
Research Challenges for Visualization Software
Over the last twenty-five years, visualization software has evolved into robust frameworks that can be used for research projects, rapid prototype development, or as the basis of richly featured end-user tools. In this article, we take stock of current capabilities and describe upcoming challenges facing visualization software in six categories: massive parallelization, emerging processor architectures, application architecture and data management, data models, rendering, and interaction. Further, for each of these categories, we describe evolutionary advances sufficient to meet the visualization software challenge and posit areas in which revolutionary advances are required.
Virtual Air Traffic System Simulation - Aiding the Communication of Air Traffic Effects
A key aspect of air traffic infrastructure projects is the communication between stakeholders during the approval process regarding their environmental impact. Yet, established means of communication have been found to be rather incomprehensible. In this paper we present an application that addresses these communication issues by enabling the exploration of airplane noise emissions in the vicinity of airports in a virtual environment (VE). The VE is composed of a model of the airport area and flight movement data. We combine a real-time 3D auralization approach with visualization techniques to allow for an intuitive access to noise emissions. Specifically designed interaction techniques help users to easily explore and compare air traffic scenarios.
Extended Pie Menus for Immersive Virtual Environments
Pie menus are a well-known technique for interacting with 2D environments, and so far a large body of research documents their usage and optimizations. Yet, comparatively little research has been done on the usability of pie menus in immersive virtual environments (IVEs). In this paper we reduce this gap by presenting an implementation and evaluation of an extended hierarchical pie menu system for IVEs that can be operated with a six-degrees-of-freedom input device. Following an iterative development process, we first developed and evaluated a basic hierarchical pie menu system. To better understand how pie menus should be operated in IVEs, we tested this system in a pilot user study with 24 participants, focusing on item selection. Based on the results of the study, the system was refined, and elements like check boxes, sliders, and color map editors were added to provide extended functionality. An expert review with five experts was performed with the extended pie menus integrated into an existing VR application to identify potential design issues. Overall results indicated high performance and efficient design.
Poster: Interactive Visualization of Brain-Scale Spiking Activity
In recent years, the simulation of spiking neural networks has advanced in terms of both simulation technology and knowledge about neuroanatomy. Due to these advances, it is now possible to run simulations at the brain scale, which produce an unprecedented amount of data to be analyzed and understood by researchers. As an aid, VisNEST, a tool for the combined visualization of simulated spike data and anatomy, was developed.
Adaptive Human Motion Prediction using Multiple Model Approaches
A common problem in Virtual Reality is latency. Especially for head tracking, latency can lead to reduced immersion. Prediction can be used to mitigate the effect of latency. However, for good results the prediction process has to be reliably fast and accurate. Human motion is not homogeneous, and humans often change the way they move. Prediction models can be designed for these special motion types. To combine the specialized models, a multiple model approach is presented. It constantly evaluates the quality of the different specialized motion predictors and adjusts the set of motion models. We propose two variants and compare them to a reference prediction algorithm.
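A minimal sketch of the multiple-model idea follows; the interfaces and the error-decay scheme are our assumptions, not the paper's exact method. It runs a bank of specialized predictors in parallel, scores each by its recent prediction error, and blends their outputs accordingly.

import numpy as np

class MultipleModelPredictor:
    # Each model is assumed to expose predict(history) -> pose (e.g., a
    # 3-vector); models whose recent predictions were accurate get more weight.
    def __init__(self, models, memory=0.9):
        self.models = models
        self.errors = np.zeros(len(models))   # exponentially decayed errors
        self.memory = memory
        self.last = None                      # predictions from previous step

    def predict(self, history):
        preds = np.array([m.predict(history) for m in self.models])
        if self.last is not None:
            # Score each model by how well it predicted the newest sample.
            err = np.linalg.norm(self.last - history[-1], axis=-1)
            self.errors = self.memory * self.errors + (1 - self.memory) * err
        self.last = preds
        weights = 1.0 / (self.errors + 1e-6)  # low recent error -> high weight
        return np.average(preds, axis=0, weights=weights)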
Poster: Interactive Visualization of Brain Volume Changes
The visual analysis of brain volume data by neuroscientists is commonly done in 2D coronal, sagittal and transversal views, limiting the visualization domain from potentially three to two dimensions. This is done to avoid occlusion and thus gain necessary context information. In contrast, this work intends to benefit from all spatial information that can help to understand the original data. Example data of a patient with brain degeneration are used to demonstrate how to enrich 2D with 3D data. To this end, two approaches are presented. First, a conventional 2D section in combination with transparent brain anatomy is used. Second, the principle of importance-driven volume rendering is adapted to allow a direct line-of-sight to relevant structures by means of a frustum-like cutout.
Poster: Hyperslice Visualization of Metamodels for Manufacturing Processes
In modeling and simulation of manufacturing processes, complex models are used to examine and understand the behavior and properties of the product or process. To save computation time, global approximation models, often referred to as metamodels, serve as surrogates for the original complex models. Such metamodels are difficult to interpret, because they usually have multi-dimensional input and output domains. We propose a visualization approach that uses hyperslices in combination with direct volume rendering, training point visualization, and gradient trajectory navigation to help in understanding such metamodels. Great care was taken to provide a high level of interactivity for the exploration of the data space.
Physically Based Rendering of the Martian Atmosphere
With the introduction of complex precomputed scattering tables by Bruneton in 2008, the quality of visualizing atmospheric scattering vastly improved. The presented algorithms allowed for the rendering of complex atmospheric features such as multiple scattering or light shafts in real-time and at interactive framerates. While the implementation published alongside the paper was merely a proof of concept, we present a more practical approach by applying their scattering theory to an already existing planetary rendering engine. Because the commonly used set of parameters only describes the atmosphere of the Earth, we further extend the scattering formulation to visualize the atmosphere of the planet Mars. We then validate the modified scattering and the resulting parameters by comparison with available imagery of the Martian atmosphere.
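As an example of the kind of re-parameterization involved, the standard Rayleigh scattering coefficient used in such precomputed-scattering renderers depends only on the refractive index and molecular number density of the atmosphere; swapping in Martian values is one part of such an extension. The sketch below uses Earth-like example values, which are assumptions for illustration.

import numpy as np

def rayleigh_scattering_coeff(wavelengths, n, N):
    # Standard Rayleigh relation: beta(lambda) = 8*pi^3*(n^2-1)^2 / (3*N*lambda^4),
    # with refractive index n and molecular number density N (per m^3).
    lam = np.asarray(wavelengths)                 # wavelengths in meters
    return 8 * np.pi**3 * (n**2 - 1)**2 / (3 * N * lam**4)

# Earth-like example at RGB wavelengths (680, 550, 440 nm):
beta = rayleigh_scattering_coeff(np.array([680e-9, 550e-9, 440e-9]),
                                 n=1.0003, N=2.545e25)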
Comparing Auditory and Haptic Feedback for a Virtual Drilling Task
While visual feedback is dominant in Virtual Environments, the use of other modalities like haptics and acoustics can enhance believability, immersion, and interaction performance. Haptic feedback is especially helpful for many interaction tasks like working with medical or precision tools. However, unlike visual and auditory feedback, haptic reproduction is often difficult to achieve due to hardware limitations. This article describes a user study examining how auditory feedback can be used to substitute haptic feedback when interacting with a vibrating tool. Participants remove some target material with a round-headed drill while avoiding damage to the underlying surface. In the experiment, varying combinations of surface force feedback, vibration feedback, and auditory feedback are used. We describe the design of the user study and present the results, which show that auditory feedback can compensate for the lack of haptic feedback.
@inproceedings{EGVE:JVRC12:049-056,
booktitle = {Joint Virtual Reality Conference of ICAT - EGVE - EuroVR},
editor = {Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts},
title = {{Comparing Auditory and Haptic Feedback for a Virtual Drilling Task}},
author = {Rausch, Dominik and Asp\"{o}ck, Lukas and Knott, Thomas and Pelzer, S\"{o}nke and Vorl\"{a}nder, Michael and Kuhlen, Torsten},
year = {2012},
publisher = {The Eurographics Association},
ISSN = {1727-530X},
ISBN = {978-3-905674-40-8},
DOI = {10.2312/EGVE/JVRC12/049-056},
pages = {49--56}
}
Geometrical-Acoustics-based Ultrasound Image Simulation
Brightness modulation (B-Mode) ultrasound (US) images are used to visualize internal body structures during diagnostic and invasive procedures, such as needle insertion for Regional Anesthesia. Due to patient availability and health risks during invasive procedures, training opportunities are often limited; thus, medical training simulators become a viable solution to the problem. Simulation of ultrasound images for medical training requires not only an acceptable level of realism but also interactive rendering times in order to be effective. To address these challenges, we present a generative method for simulating B-Mode ultrasound images using surface representations of the body structures and geometrical acoustics to model sound propagation and its interaction within soft tissue. Furthermore, physical models for backscattered, reflected, and transmitted energies as well as for the beam profile are used in order to improve realism. Through the proposed methodology we are able to simulate, in real-time, plausible view- and depth-dependent visual artifacts that are characteristic of B-Mode US images, achieving both realism and interactivity.
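One building block of such a geometrical-acoustics model is the energy split at a tissue interface, which follows from the acoustic impedances. The sketch below shows the standard normal-incidence relation; it is not the paper's full model, which also covers backscatter and the beam profile, and the example impedances are approximate textbook values.

def interface_energies(z1, z2):
    # Standard geometrical-acoustics relation at normal incidence: the
    # reflected intensity fraction is ((z2 - z1) / (z2 + z1))**2 and the
    # remainder is transmitted. z1, z2: acoustic impedances in kg/(m^2*s).
    r = ((z2 - z1) / (z2 + z1)) ** 2
    return r, 1.0 - r

# Example: soft tissue (~1.63e6) to bone (~7.8e6) reflects roughly 43% of the energy.
reflected, transmitted = interface_energies(1.63e6, 7.8e6)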
Poster: VisNEST - Interactive Analysis of Neural Activity Data
Modeling and simulating a brain’s connectivity produces an immense amount of data, which has to be analyzed in a timely fashion. Neuroscientists are currently modeling parts of the brain, e.g. the visual cortex, of primates like Macaque monkeys in order to deduce functionality and transfer newly gained insights to the human brain. Current research leverages the power of today’s High Performance Computing (HPC) machines to simulate low-level neural activity. In this paper, we describe an interactive analysis tool that enables neuroscientists to visualize the resulting simulation output. One of the driving challenges behind our development is the integration of macroscopic data, e.g. brain areas, with microscopic simulation results, e.g. the spiking behavior of individual neurons.
Honorable Mention!
CAVIR: Correspondence Analysis in Virtual Reality. Ways to a Valid Interpretation of Correspondence Analytical Point Clouds in Virtual Environments
Correspondence Analysis (CA) is frequently used to interpret correlations between categorical variables in the area of market research. To do so, coherences of variables are converted into a three-dimensional point cloud and plotted as three different 2D mappings. The major challenge is to interpret these plots correctly. Due to the missing third axis, distances can easily be under- or overestimated, which can lead to misclustering and misinterpretation of the data and thus to faulty conclusions. To address this problem, we present CAVIR, an approach for CA in Virtual Reality. It supports users with a virtual three-dimensional representation of the point cloud and different options to show additional information, measure Euclidean distances, and cluster points. In addition, the free rotation of the entire point cloud enables the CA user to always have a correct view of the data.
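For readers unfamiliar with how the point cloud arises, the following numpy sketch shows the standard correspondence analysis computation via a singular value decomposition. It is the generic textbook formulation, not CAVIR's implementation, and the survey table is made up.

import numpy as np

def ca_coordinates(table: np.ndarray, dims: int = 3):
    """Row and column principal coordinates of a contingency table."""
    P = table / table.sum()                      # correspondence matrix
    r = P.sum(axis=1)                            # row masses
    c = P.sum(axis=0)                            # column masses
    # Standardized residuals: deviation from the independence model.
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sigma, Vt = np.linalg.svd(S, full_matrices=False)
    rows = (U * sigma) / np.sqrt(r)[:, None]     # row principal coordinates
    cols = (Vt.T * sigma) / np.sqrt(c)[:, None]  # column principal coordinates
    return rows[:, :dims], cols[:, :dims]        # first three axes: the point cloud

# Example: a small, invented brand-by-attribute survey table.
counts = np.array([[20, 5, 10], [4, 25, 6], [8, 7, 30], [15, 3, 2]])
row_points, col_points = ca_coordinates(counts)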
@Article{Graff2012,
Title = {{CAVIR}: {C}orrespondence {A}nalysis in {V}irtual {R}eality. {W}ays to a {V}alid {I}nterpretation of {C}orrespondence {A}nalytical {P}oint {C}louds in {V}irtual {E}nvironments},
Author = {Frederik Graff and Andrea B\"{o}nsch and Daniel B\"{u}ndgens and Torsten Kuhlen},
Journal = {{C}onference {P}roceedings: {I}nternational {M}asaryk {C}onference for {P}h.{D}. {S}tudents and {Y}oung {R}esearchers},
Year = {2012},
Pages = {653-662},
Volume = {3},
Url = {http://www.vedeckekonference.cz/library/proceedings/mmk_2012.pdf}
}
CAVIR: Correspondence Analysis in Virtual Reality
Correspondence Analysis (CA) is used to interpret correlations between categorical variables in the areas of social science and market research. To do so, coherences of variables are converted into a three-dimensional point cloud and plotted as several different 2D mappings, each containing two axes. The major challenge is to interpret these plots correctly. Due to the missing third axis, distances can easily be under- or overestimated, which can lead to misinterpretation and thus to misclustering of the data.
To address this problem, we present CAVIR, an approach for CA in Virtual Reality. It supports users with a three-dimensional representation of the point cloud and different options to show additional information, measure Euclidean distances, and cluster points. In addition, motion parallax and free rotation of the entire point cloud enable the CA expert to always have a correct view of the data.
Best Presentation Award!
@Article{Boensch2012,
Title = {{CAVIR}: {C}orrespondence {A}nalysis in {V}irtual {R}eality},
Author = {Andrea B\"{o}nsch and Frederik Graff and Daniel B\"{u}ndgens and Torsten Kuhlen},
Journal = {{V}irtuelle und {E}rweiterte {R}ealit\"at, 9. {W}orkshop der {GI}-{F}achgruppe {VR}/{AR}},
Year = {2012},
Pages = {49-60},
ISBN = {978-3-8440-1309-2},
Publisher = {Shaker Verlag},
}
Visualizing Acoustical Simulation Data in Immersive Virtual Environments
In this contribution, we present an immersive visualization of room acoustical simulation data. In contrast to the commonly employed external viewpoint, our approach places the user inside the visualized data. The main problem with this technique is the occlusion of some data points by others. We present different solutions for this problem that allow an interactive analysis of the simulation data.
Bimanual Haptic Simulator for Medical Training: System Architecture and Performance Measurements
In this paper we present a simulator for two-handed haptic interaction. As an application example, we chose a medical scenario that requires simultaneous interaction with a hand and a needle on a simulated patient. The system combines bimanual haptic interaction with a physics-based soft-tissue simulation. To our knowledge, the combination of finite element methods for the simulation of deformable objects with haptic rendering is seldom addressed, especially with two haptic devices in a non-trivial scenario. The challenges are to find a balance between real-time constraints and the high computational demands of a faithful simulation, and to synchronize data between the system components. The system has been successfully implemented and tested on two different hardware platforms: a mobile one on a laptop and a stationary one on a semi-immersive VR system. These two platforms were chosen to demonstrate scalability in terms of fidelity and costs. To compare performance and estimate latency, we measured the timings of the update loops and logged event-based timings of several software components.
@inproceedings {EGVE:JVRC11:039-046,
booktitle = {Joint Virtual Reality Conference of EGVE - EuroVR},
editor = {Sabine Coquillart and Anthony Steed and Greg Welch},
title = {{Bimanual Haptic Simulator for Medical Training: System Architecture and Performance Measurements}},
author = {Ullrich, Sebastian and Rausch, Dominik and Kuhlen, Torsten},
year = {2011},
pages={39--46},
publisher = {The Eurographics Association},
DOI = {10.2312/EGVE/JVRC11/039-046}
}
Efficiently Navigating Data Sets Using the Hierarchy Browser
A major challenge in Virtual Reality is to enable users to efficiently explore virtual environments, regardless of prior knowledge. This is particularly true for complex virtual scenes containing a huge number of potential areas of interest. Providing the user with convenient access to these areas is of prime importance, as is supporting her in orienting herself within the virtual scene. Techniques exist for either aspect, but combining them into one holistic system is not trivial. To address this issue, we present the Hierarchy Browser. It supports the user in creating a mental image of the scene by offering a well-arranged, hierarchical visual representation of the scene structure as well as interaction techniques to browse it. Additional interactions allow the user to trigger scene manipulations, e.g. an automated travel to a desired area of interest. We evaluate the Hierarchy Browser by means of an expert walkthrough.
@Article{Boensch2011,
Title = {{E}fficiently {N}avigating {D}ata {S}ets {U}sing the {H}ierarchy {B}rowser},
Author = {Andrea B\"{o}nsch and Sebastian Pick and Bernd Hentschel and Torsten Kuhlen},
Journal = {{V}irtuelle und {E}rweiterte {R}ealit\"at, 8. {W}orkshop der {GI}-{F}achgruppe {VR}/{AR}},
Year = {2011},
Pages = {37-48},
ISBN = {978-3-8440-0394-9},
Publisher = {Shaker Verlag}
}
Efficient Rasterization for Outdoor Radio Wave Propagation
Conventional beam tracing can be used for solving global illumination problems. It is an efficient algorithm and performs very well when implemented on the GPU. This allows us to apply the algorithm in a novel way to the problem of radio wave propagation, since the simulation of radio waves is conceptually analogous to the problem of light transport. We use a custom, parallel rasterization pipeline for the creation and evaluation of the beams, implementing a subset of a standard 3D rasterization pipeline entirely on the GPU and supporting 2D and 3D framebuffers for output. Our algorithm can provide a detailed description of complex radio channel characteristics like propagation losses and the spread of arriving signals over time (delay spread). These are essential for the planning of the communication systems required by mobile network operators. For validation, we compare our simulation results with measurements from a real-world network. Furthermore, we account for the characteristics of different propagation environments and estimate the influence of unknown components like traffic or vegetation by adapting model parameters to measurements.
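As a small illustration of the channel characteristics mentioned above, the following sketch computes the total received power and the RMS delay spread from a set of arriving beams. The input values are invented, and the power-weighted aggregation is the standard textbook formulation rather than code from our pipeline.

import math

def channel_stats(paths):
    """paths: list of (power_watts, delay_seconds), one entry per arriving beam."""
    total_power = sum(p for p, _ in paths)
    mean_delay = sum(p * t for p, t in paths) / total_power
    # RMS delay spread: power-weighted standard deviation of arrival times.
    spread = math.sqrt(sum(p * (t - mean_delay) ** 2 for p, t in paths) / total_power)
    return total_power, spread

# Three arriving beams: a direct path plus two reflected ones.
power, delay_spread = channel_stats([(1e-6, 1.0e-6), (3e-7, 1.4e-6), (1e-7, 2.1e-6)])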
3D Sketch Recognition for Interaction in Virtual Environments
We present a comprehensive 3D sketch recognition framework for interaction within Virtual Environments that allows users to trigger commands by drawing symbols, which are recognized by a multi-level analysis. It proceeds in three steps: segmentation partitions each input line into meaningful segments, each segment is then recognized as a primitive shape, and finally the sketch is analyzed as a whole in a symbol-matching step. The whole framework is configurable via well-defined interfaces, utilizing a fuzzy logic algorithm for primitive shape learning and a textual description language to define compound symbols. It allows an individualized interaction approach that can be used without much training and provides a good balance between abstraction and intuition. We show the real-time applicability of our approach with performance measurements.
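To give an impression of the first of the three steps, the sketch below splits a drawn polyline into segments at high-curvature corner points. The corner criterion and the threshold are illustrative choices standing in for the fuzzy-logic machinery described in the paper.

import math

def turning_angle(a, b, c):
    """Angle (radians) between the segments a->b and b->c."""
    v1 = (b[0] - a[0], b[1] - a[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def segment_stroke(points, corner_angle=math.radians(45)):
    """Partition a stroke at corners; each segment would then be handed
    to the primitive-shape classifier."""
    segments, start = [], 0
    for i in range(1, len(points) - 1):
        if turning_angle(points[i - 1], points[i], points[i + 1]) > corner_angle:
            segments.append(points[start:i + 1])
            start = i
    segments.append(points[start:])
    return segments

# An L-shaped stroke splits into two segments at its corner.
stroke = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
print(len(segment_stroke(stroke)))  # -> 2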
@inproceedings {PE:vriphys:vriphys10:115-124,
booktitle = {Workshop in Virtual Reality Interactions and Physical Simulation "VRIPHYS" (2010)},
editor = {Kenny Erleben and Jan Bender and Matthias Teschner},
title = {{3D} Sketch Recognition for Interaction in Virtual Environments},
author = {Rausch, Dominik and Assenmacher, Ingo and Kuhlen, Torsten},
year = {2010},
publisher = {The Eurographics Association},
DOI = {10.2312/PE/vriphys/vriphys10/115-124}
}
Virtual Reality System at RWTH Aachen University
During the last decade, Virtual Reality (VR) systems have progressed from early laboratory experiments into serious and valuable tools. The number of useful applications has grown on a large scale, covering conventional uses, e.g., in science, design, medicine, and engineering, as well as more visionary applications such as creating virtual spaces that aim to appear real. However, the high capabilities of today’s virtual reality systems are mostly limited to first-class visual rendering, which alone does not suffice for truly immersive applications. To be generally applicable, VR systems should feature more than one modality in order to broaden their range of applications. The CAVE-like immersive environment run at RWTH Aachen University combines state-of-the-art visualization and auralization with almost no constraints on user interaction. In this article, a summary of the concept, the features, and the performance of our VR system is given. The system features a 3D sketching interface that allows controlling the application in a very natural way by simple gestures. The sound rendering engine relies on present-day knowledge of Virtual Acoustics and enables a physically accurate simulation of sound propagation in complex environments, including important wave effects such as sound scattering, airborne sound insulation between rooms, and sound diffraction. In spite of this realistic sound field rendering, not only spatially distributed and freely movable sound sources and receivers are supported, but also modifications and manipulations of the environment itself. The auralization concept is founded on pure FIR filtering, realized by highly parallelized, non-uniformly partitioned convolutions. A dynamic crosstalk cancellation system performs the sound reproduction, delivering binaural signals to the user without the need for headphones. The significant computational complexity is handled by distributed computation on PC clusters that drive the simulation in real-time even for huge audio-visual scenarios.
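The FIR-filtering idea behind the auralization can be conveyed by a strongly simplified block convolution in the frequency domain (overlap-add). The production system relies on non-uniformly partitioned convolutions for low latency; the uniform, single-partition sketch below, with a synthetic impulse response, only illustrates the principle.

import numpy as np

def overlap_add_convolve(signal, ir, block=256):
    # FFT size: the next power of two that fits one block convolved with the IR.
    n_fft = 1
    while n_fft < block + len(ir) - 1:
        n_fft *= 2
    H = np.fft.rfft(ir, n_fft)          # transform the impulse response once
    out = np.zeros(len(signal) + len(ir) - 1)
    for start in range(0, len(signal), block):
        x = signal[start:start + block]
        y = np.fft.irfft(np.fft.rfft(x, n_fft) * H, n_fft)
        out[start:start + n_fft] += y[:len(out) - start]   # overlap-add
    return out

# Example: filter one second of noise with a short synthetic room response.
rng = np.random.default_rng(0)
dry = rng.standard_normal(44_100)
ir = np.exp(-np.linspace(0.0, 8.0, 2_048)) * rng.standard_normal(2_048)
wet = overlap_add_convolve(dry, ir)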
@inproceedings{schroder2010virtual,
title={Virtual reality system at RWTH Aachen University},
author={Schr{\"o}der, Dirk and Wefers, Frank and Pelzer, S{\"o}nke and Rausch, Dominik and Vorl{\"a}nder, Michael and Kuhlen, Torsten},
booktitle={Proceedings of the international symposium on room acoustics (ISRA), Melbourne, Australia},
year={2010}
}
Simulation of Radio Wave Propagation by Beam Tracing
Beam tracing can be used for solving global illumination problems. It is an efficient algorithm and performs very well when implemented on the GPU. This allows us to apply the algorithm in a novel way to the problem of radio wave propagation. The simulation of radio waves is conceptually analogous to the problem of light transport. However, their wavelengths are of proportions similar to those of the environment. At such frequencies, waves that bend around corners due to diffraction become an important propagation effect. In this paper we present a method which integrates diffraction, on top of the usual effects related to global illumination like reflection, into our beam tracing algorithm. We use a custom, parallel rasterization pipeline for the creation and evaluation of the beams. Our algorithm can provide a detailed description of complex radio channel characteristics like propagation losses and the spread of arriving signals over time (delay spread). These are essential for the planning of the communication systems required by mobile network operators. For validation, we compare our simulation results with measurements from a real-world network.
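As a rough illustration of why diffraction matters at these wavelengths, the sketch below evaluates the classic single knife-edge approximation of diffraction loss. This textbook model merely stands in for the beam-based diffraction formulation presented in the paper.

import math

def knife_edge_loss_db(h_m, d1_m, d2_m, freq_hz):
    """Extra path loss (dB) caused by an edge protruding h_m above the line
    of sight, at distances d1_m and d2_m from transmitter and receiver."""
    lam = 3.0e8 / freq_hz
    v = h_m * math.sqrt(2.0 * (d1_m + d2_m) / (lam * d1_m * d2_m))
    if v <= -0.78:
        return 0.0   # edge far below the line of sight: no diffraction loss
    return 6.9 + 20.0 * math.log10(math.sqrt((v - 0.1) ** 2 + 1.0) + v - 0.1)

# A rooftop 5 m above the line of sight, halfway along a 400 m link at 2 GHz.
print(f"{knife_edge_loss_db(5.0, 200.0, 200.0, 2.0e9):.1f} dB")  # ~18 dB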
Beam Tracing for Multipath Propagation in Urban Environments
We present a novel method for the efficient computation of complex channel characteristics due to multipath effects in urban microcell environments. Significant speedups are obtained compared to state-of-the-art ray-tracing algorithms by tracing continuous beams and by using parallelization techniques. We optimize simulation parameters using on-site measurements from real-world networks. We formulate the adaptation of the model parameters as a constrained least-squares problem in which each row of the matrix corresponds to one measurement location and the columns are formed by the beams that reach the respective location.
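The fitting step can be illustrated with a small synthetic system: each row corresponds to one measurement location, each column to a per-interaction loss parameter accumulated along the beams reaching that location, and bounds keep the fitted losses physically plausible. The matrix below is synthetic, and scipy's lsq_linear merely stands in for the actual solver.

import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(42)
n_locations, n_params = 200, 4     # e.g. per-interaction loss terms in dB

# A[i, j]: how strongly parameter j contributes along the beams that reach
# measurement location i (synthetic stand-in for the traced beams).
A = rng.poisson(2.0, size=(n_locations, n_params)).astype(float)
true_params = np.array([6.0, 9.0, 12.0, 20.0])
measured_loss = A @ true_params + rng.normal(0.0, 1.0, n_locations)

# Constrained least squares: per-interaction losses must be non-negative.
fit = lsq_linear(A, measured_loss, bounds=(0.0, np.inf))
print(fit.x)   # recovered loss parameters, close to true_params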