Dr.-Ing. Benjamin Weyers
Phone: +49 241 80 24920
Fax: +49 241 80 22134
Dissertation: Reconfiguration of User Interface Models for Monitoring and Control of Human-Computer Systems, Universität Duisburg-Essen, 2012
Interactive analysis of 3D relational data is challenging. Node-link diagrams are a common way of representing such data, as they support analysts in building a mental model of the data. However, naïve 3D depictions of complex graphs tend to be visually cluttered, even more so than 2D layouts, which makes graph exploration and data analysis less efficient. This problem can be addressed by edge bundling. We introduce a 3D cluster-based edge bundling algorithm that is inspired by the force-directed edge bundling (FDEB) algorithm [Holten2009] and fulfills the requirements for being embedded in an interactive framework for spatial data analysis. It is parallelized, and its runtime scales with the size of the graph. Furthermore, it maintains the edge model and thus supports rendering the graph in different structural styles. We demonstrate this with a graph originating from a simulation of the function of a macaque brain.
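For readers unfamiliar with the underlying scheme, FDEB iteratively subdivides each edge and moves the subdivision points under two kinds of forces: springs between neighboring points of the same edge, and attraction between corresponding points of compatible edges. The following Python sketch illustrates only this generic per-iteration force step; it is not the 3D cluster-based variant described above, and the compatibility weights, stiffness, and step size are illustrative assumptions.

```python
import numpy as np

def fdeb_iteration(edges, k=0.1, compat=None):
    """One illustrative force step on subdivided edges.

    edges: float array of shape (E, P, 3) -- P subdivision points per edge in 3D.
    k: spring stiffness controlling how strongly an edge resists bending.
    compat: (E, E) pairwise edge-compatibility weights in [0, 1] (assumed given).
    """
    E, P, _ = edges.shape
    if compat is None:
        compat = np.ones((E, E)) - np.eye(E)
    forces = np.zeros_like(edges)

    # Spring forces pull every interior point towards its two neighbours,
    # keeping the bundled edge smooth.
    forces[:, 1:-1] += k * (edges[:, :-2] - edges[:, 1:-1])
    forces[:, 1:-1] += k * (edges[:, 2:] - edges[:, 1:-1])

    # Attraction between corresponding points of other (compatible) edges
    # pulls edges together into bundles.
    for i in range(E):
        for j in range(E):
            if i == j or compat[i, j] == 0.0:
                continue
            delta = edges[j, 1:-1] - edges[i, 1:-1]
            dist = np.linalg.norm(delta, axis=1, keepdims=True) + 1e-9
            forces[i, 1:-1] += compat[i, j] * delta / dist

    # Endpoints stay fixed; only interior subdivision points move.
    edges[:, 1:-1] += 0.01 * forces[:, 1:-1]
    return edges
```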
To avoid simulator sickness and improve presence in immersive virtual environments (IVEs), high frame rates and low latency are required. In contrast, volume rendering applications typically strive for high visual quality that induces high computational load and, thus, leads to low frame rates. To evaluate this trade-off in IVEs, we conducted a controlled user study with 53 participants. Search and count tasks were performed in a CAVE under varying volume rendering conditions, which were applied according to viewer position updates from head tracking. The results of our study indicate that participants preferred the rendering condition with continuous adjustment of the visual quality over an instantaneous adjustment that guaranteed low latency, and over no adjustment, which provided constant high visual quality but rather low frame rates. Within the continuous condition, participants showed the best task performance and felt less disturbed by effects of the visualization during movements. Our findings provide a good basis for further evaluations of how to accelerate volume rendering in IVEs according to users' preferences.
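The continuous adjustment condition can be pictured as a controller that trades visual quality against frame rate while the viewer moves. The sketch below is purely illustrative of such a mapping from tracked head speed to a ray-casting step size; the function, names, and thresholds are assumptions and are not taken from the study.

```python
def adjust_sampling(head_speed, step_min=0.5, step_max=4.0, speed_max=1.0):
    """Map tracked head speed (m/s) to a ray-casting step-size multiplier.

    Faster head motion -> coarser sampling (fewer steps, higher frame rate);
    a resting viewer gets full quality again. The linear mapping and the
    clamping range are illustrative choices only.
    """
    t = min(max(head_speed / speed_max, 0.0), 1.0)
    return step_min + t * (step_max - step_min)

# Example: standing still -> 0.5 (full quality), fast head motion -> 4.0 (coarse).
print(adjust_sampling(0.0), adjust_sampling(2.0))
```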
When moving through a tracked immersive virtual environment, it is sometimes useful to deviate from the normal one-to-one mapping of real to virtual motion. One option is the application of rotation gain, where the virtual rotation of a user around the vertical axis is amplified or reduced by a factor. Previous research in head-mounted display environments has shown that rotation gain can go unnoticed to a certain extent, which is exploited in redirected walking techniques. Furthermore, it can be used to increase the effective field of regard in projection systems. However, rotation gain has not yet been studied in CAVE systems. In this work, we present an experiment with 87 participants examining the effects of rotation gain in a CAVE-like virtual environment. The results show no significant effects of rotation gain on simulator sickness, presence, or user performance in a cognitive task, but indicate that there is a negative influence on spatial knowledge, especially for inexperienced users. In secondary results, we could confirm findings of previous work and demonstrate that they also hold for CAVE environments, showing negative correlations of simulator sickness with presence, cognitive performance, and spatial knowledge; a positive correlation between presence and spatial knowledge; a mitigating influence of experience with 3D applications and previous CAVE exposure on simulator sickness; and a higher incidence of simulator sickness in women.
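For reference, rotation gain is simply a multiplicative factor applied to the user's tracked yaw before it drives the virtual camera. A minimal sketch of this mapping (variable names are illustrative only):

```python
def apply_rotation_gain(real_yaw_delta_deg, gain):
    """Return the virtual yaw change for a tracked real yaw change.

    gain > 1 amplifies head rotation (the user covers more of the scene per
    physical turn), gain < 1 reduces it; gain = 1 is the one-to-one mapping.
    """
    return gain * real_yaw_delta_deg

# A 90 degree physical turn with gain 1.3 yields a 117 degree virtual turn.
print(apply_rotation_gain(90.0, 1.3))
```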
Data annotation finds increasing use in Virtual Reality applications with the goal of supporting data analysis processes, such as architectural reviews. In this context, a variety of different annotation systems for immersive virtual environments have been presented. While many interesting interaction designs for the data annotation workflow have emerged from them, important details and evaluations are often omitted. In particular, we observe that the process of handling metadata to interactively create and manage complex annotations is often not covered in detail. In this paper, we strive to improve this situation by focusing on the design of data annotation workflows and their evaluation. We propose a workflow design that facilitates the most important annotation operations, i.e., annotation creation, review, and modification. Our workflow design is easily extensible in terms of supported annotation and metadata types as well as interaction techniques, which makes it suitable for a variety of application scenarios. To evaluate it, we conducted a user study in a CAVE-like virtual environment in which we compared our design to two alternatives on a realistic annotation creation task. Our design obtained good results in terms of task performance and user experience.
Provenance tracking for visual analysis workflows is still a challenge, as interaction and collaboration aspects in particular are poorly covered in existing realizations. Therefore, we propose a first prototype addressing these issues based on the PROV model. Interactions in multiple applications by multiple users can be tracked by means of a web interface, thus even allowing remotely located collaboration partners to be tracked. Finally, we demonstrate the applicability based on two use cases and discuss some open issues not yet addressed by our implementation that can easily be integrated into our architecture.
Computer-controlled, human-like virtual agents (VAs) are often embedded into immersive virtual environments (IVEs) in order to enliven a scene or to assist users. Certain constraints need to be fulfilled, e.g., a collision avoidance strategy that allows users to maintain their personal space. Violating this flexible protective zone causes discomfort, both in real-world situations and in IVEs. However, no studies on collision avoidance for small-scale IVEs have been conducted yet.
Our goal is to close this gap by presenting the results of a controlled user study in a CAVE. 27 participants were immersed in a small-scale office with the task of reaching the office door. Their way was blocked by either a male or a female VA representing their co-worker. The VA showed different behavioral patterns regarding gaze and locomotion.
Our results indicate that participants preferred collaborative collision avoidance: they expect the VA to step aside in order to get more space to pass while being willing to adapt their own walking paths.
Honorable Mention for Best Technote!
When traveling virtually through large scenes, long distances and different detail densities render fixed movement speeds impractical. However, to manually adjust the travel speed, users have to control an additional parameter, which may be uncomfortable and require cognitive effort. Although automatic speed adjustment techniques exist, many of them can be problematic in indoor scenes. Therefore, we propose to automatically adjust travel speed based on viewpoint quality, originally a measure of the informativeness of a viewpoint. In a user study, we show that our technique is easy to use, allowing users to reach targets faster and use fewer cognitive resources than when choosing their speed manually.
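The underlying mapping can be summarized as: the more informative the current viewpoint, the lower the travel speed. The sketch below shows one plausible linear mapping; the actual function and parameters used by the technique are not reproduced here.

```python
def travel_speed(viewpoint_quality, v_min=0.5, v_max=10.0):
    """Map a normalized viewpoint-quality score in [0, 1] to travel speed (m/s).

    High quality (much visible detail) -> slow, careful movement;
    low quality (empty or uniform surroundings) -> fast traversal.
    """
    q = min(max(viewpoint_quality, 0.0), 1.0)
    return v_max - q * (v_max - v_min)

print(travel_speed(0.9), travel_speed(0.1))  # slow vs. fast
```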
To use the full potential of immersive data analysis when wearing a head-mounted display, users have to be able to navigate through the spatial data. We collected, developed and evaluated five different hands-free navigation methods that are usable while seated at the analyst's usual workplace. All methods meet the requirements of being easy to learn and inexpensive to integrate into existing workplaces. We conducted a user study with 23 participants, which showed that a body leaning metaphor and an accelerometer pedal metaphor performed best. In the given task, participants had to determine the shortest path between various pairs of vertices in a large 3D graph.
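As an illustration of how a body leaning metaphor can be realized for a seated user, the sketch below maps the horizontal offset of the tracked head from a calibrated rest pose to a travel velocity with a small dead zone. It is a hypothetical implementation, not the one evaluated in the study.

```python
import numpy as np

def leaning_velocity(head_pos, rest_pos, dead_zone=0.03, speed_scale=5.0):
    """Translate a seated user's lean into a horizontal travel velocity.

    head_pos, rest_pos: 3D head positions in metres from the HMD tracker.
    dead_zone: lean offsets below this magnitude (m) are ignored so that
    ordinary posture shifts do not trigger travel.
    """
    offset = np.asarray(head_pos, dtype=float) - np.asarray(rest_pos, dtype=float)
    offset[1] = 0.0                      # ignore the vertical component
    magnitude = np.linalg.norm(offset)
    if magnitude < dead_zone:
        return np.zeros(3)
    direction = offset / magnitude
    return speed_scale * (magnitude - dead_zone) * direction

# Leaning 10 cm forward-right from the calibrated rest pose.
print(leaning_velocity([0.10, 1.20, 0.02], [0.0, 1.20, 0.0]))
```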
In the simulation of multi-component systems, we often encounter the problem of lacking ground-truth data. This situation makes the validation of our simulation methods and models a difficult task. In this work, we present a guideline for designing validation methodologies that can be applied to the validation of multi-component simulations lacking ground-truth data. Additionally, we present an example applied to an ultrasound image simulation for medical training and give an overview of the considerations made and the results for each of the validation methods. With these guidelines, we expect to obtain more comparable and reproducible validation results from which other similar work can benefit.
Workflows for the acquisition and analysis of data in the natural sciences exhibit a growing degree of complexity and heterogeneity, are increasingly performed in large collaborative efforts, and often require the use of high-performance computing (HPC). Here, we explore the reasons for these new challenges and demands and discuss their impact, with a focus on the scientific domain of computational neuroscience. We argue for the need for software platforms integrating HPC systems that allow scientists to construct, comprehend and execute workflows composed of diverse processing steps using different tools. As a use case we present a concrete implementation of such a complex workflow, covering diverse topics such as HPC-based simulation using the NEST software, access to the SpiNNaker neuromorphic hardware platform, complex data analysis using the Elephant library, and interactive visualizations. Tools are embedded into a web-based software platform under development by the Human Brain Project, called Collaboratory. On the basis of this implementation, we discuss the state of the art and future challenges in constructing large, collaborative workflows with access to HPC resources.
Interactive visual data analysis is a well-established class of methods for gathering knowledge from raw and complex data. A broad variety of examples in the literature shows its applicability in various ways and in different scientific domains. However, fully fledged visual analysis solutions addressing learning analytics are still rare. Therefore, this paper discusses visual and interactive data analysis for learning analytics by presenting best practices, followed by a discussion of a general architecture that combines interactive visualization following the Information Seeking Mantra with the paradigm of coordinated multiple views. Finally, its applicability is demonstrated with a use case for ubiquitous learning analytics, focusing on temporal and spatial relations in learning data. The data is gathered from a ubiquitous learning scenario that offers information for students to identify learning partners and provides information to teachers for adapting their learning material.
To use the full potential of immersive data analysis when wearing a head-mounted display, the user has to be able to navigate through the spatial data. We collected, developed and evaluated five different hands-free navigation methods that are usable while seated at the analyst's usual workplace. All methods meet the requirements of being easy to learn and inexpensive to integrate into existing workplaces. We conducted a user study with 23 participants, which showed that a body leaning metaphor and an accelerometer pedal metaphor performed best within the given task.
Modeling large-scale spiking neural networks showing realistic biological behavior in their dynamics is a complex and tedious task. Since these networks consist of millions of interconnected neurons, their simulation produces an immense amount of data. In recent years it has become possible to simulate even larger networks. However, solutions that assist researchers in understanding the simulation's complex emergent behavior by means of visualization are still lacking. While developing tools to partially fill this gap, we encountered the challenge of integrating these tools easily into the neuroscientists' daily workflow. To understand what makes this so challenging, we looked into the workflows of our collaborators and analyzed how they use visualizations to solve their daily problems. We identified two major issues: first, the analysis process can rapidly change focus, which requires switching the visualization tool that assists in the current problem domain. Second, because of the heterogeneous data that results from simulations, researchers want to relate different data in order to investigate them effectively. Since a monolithic application that models, processes, and visualizes all data modalities and reflects all combinations of possible workflows in a holistic way is most likely impossible to develop and maintain, a software architecture offering specialized visualization tools that run simultaneously and can be linked together to reflect the current workflow is a more feasible approach. To this end, we have developed a software architecture that allows neuroscientists to integrate visualization tools more closely into their modeling tasks. In addition, it forms the basis for semantic linking of different visualizations to reflect the current workflow. In this paper, we present this architecture and substantiate the usefulness of our approach with common use cases we encountered in our collaborative work.
When learning ultrasound (US) imaging, trainees must learn how to recognize structures, interpret textures and shapes, and simultaneously register the 2D ultrasound images to their 3D anatomical mental models. Alleviating the cognitive load imposed by these tasks should free cognitive resources and thereby improve the learning process. We argue that the cognitive load required to mentally rotate the models to match the images is too high and therefore negatively impacts the learning process. We present a 3D visualization tool that allows the user to naturally move a 2D slice and navigate around a 3D anatomical model. The slice is displayed in place to facilitate the registration of the 2D slice in its 3D context. Two duplicates are also shown outside the model: the first is a simple rendered image showing the outlines of the structures, and the second is a simulated ultrasound image. Haptic cues are also provided to help users maneuver around the 3D model in the virtual space. With the additional display of annotations and information on the most important structures, the tool is expected to complement the available didactic material used in the training of ultrasound procedures.
The knowledge of which places in a virtual environment are interesting or informative can be used to improve user interfaces and to create virtual tours. Viewpoint Quality Estimation algorithms approximate this information by calculating quality scores for viewpoints. However, even though several such algorithms exist and have also been used, e.g., in virtual tour generation, they have never been comparatively evaluated on virtual scenes. In this work, we introduce three new Viewpoint Quality Estimation algorithms and compare them against each other and six existing metrics by applying them to two different virtual scenes. Furthermore, we conducted a user study to obtain a quantitative evaluation of viewpoint quality. The results reveal strengths and limitations of the metrics on actual scenes, and provide recommendations on which algorithms to use for real applications.
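One of the classic metrics this line of work builds on is viewpoint entropy, which scores a viewpoint by how evenly the projected areas of the visible polygons are distributed. The sketch below shows only that established baseline metric; the three new algorithms introduced in the paper are not reproduced here.

```python
import math

def viewpoint_entropy(projected_areas):
    """Viewpoint entropy over the projected areas of visible faces.

    projected_areas: projected screen areas of the visible polygons (the
    background can be treated as one additional face). A more even
    distribution of visible area yields a higher, i.e. better, score.
    """
    total = sum(projected_areas)
    if total <= 0.0:
        return 0.0
    entropy = 0.0
    for a in projected_areas:
        if a > 0.0:
            p = a / total
            entropy -= p * math.log2(p)
    return entropy

print(viewpoint_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits, evenly spread
print(viewpoint_entropy([0.97, 0.01, 0.01, 0.01]))  # low score, one face dominates
```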
The UN Convention on the Rights of Persons with Disabilities Article 24 states that “States Parties shall ensure inclusive education at all levels of education and lifelong learning.” This article focuses on the inclusion of people with visual impairments in learning processes involving complex table-based data. Gaining insight into and understanding of complex data is a highly demanding task for people with visual impairments. Especially in the case of table-based data, the classic approaches of braille-based output devices and printing concepts are limited. Haptic perception requires sequential information processing rather than the parallel processing used by the visual system, which makes it harder to gain a quick overview of and deeper insight into the data. Nevertheless, neuroscientific research has identified strong dependencies between haptic perception and the cognitive processing of visual sensing. Based on these findings, we developed a haptic 3D surface representation of classic diagrams and charts, such as bar graphs and pie charts. In a qualitative evaluation study, we identified certain advantages of our relief-type 3D chart approach. Finally, we present an education model for German schools that includes a 3D printing approach to help integrate students with visual impairments.
Modern localization techniques are based on the Global Positioning System (GPS). In general, the accuracy of the measurement depends on various uncertain parameters. In addition, despite its relevance, a number of localization approaches fail to consider the modeling of uncertainty in geographic information system (GIS) applications. This paper describes a new verified method for uncertain GPS localization for use in GPS and GIS application scenarios, based on Dempster-Shafer theory (DST) with two-dimensional and interval-valued basic probability assignments. The main benefit our approach offers for GIS applications is a workflow concept using DST-based models that are embedded into an ontology-based semantic querying mechanism accompanied by 3D visualization techniques. This workflow provides interactive means of querying uncertain GIS models semantically and provides visual feedback.
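At the core of any DST-based fusion lies Dempster's rule of combination, which merges two basic probability assignments (BPAs) defined over subsets of a frame of discernment. The sketch below shows the standard scalar-valued rule for orientation only; the paper's two-dimensional and interval-valued BPAs are not reproduced, and the example hypotheses are hypothetical.

```python
def combine_bpa(m1, m2):
    """Dempster's rule of combination for two basic probability assignments.

    m1, m2: dicts mapping frozensets (hypotheses) to masses summing to 1.
    Returns the combined BPA; raises if the evidence is fully conflicting.
    """
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two position hypotheses A and B with differing support from two sensors.
A, B = frozenset(["A"]), frozenset(["B"])
m1 = {A: 0.6, A | B: 0.4}
m2 = {A: 0.5, B: 0.3, A | B: 0.2}
print(combine_bpa(m1, m2))
```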
In this work, we present an approach for tracking the feet of multiple users in CAVE-like systems with under-floor projection. It is based on low-cost consumer cameras, does not require users to wear additional equipment, and can be installed without modifying existing components. If the brightness of the floor projection does not contain too much variation, the feet of several people can be successfully and precisely tracked and assigned to individuals. The tracking data can be used to enable or enhance user interfaces like Walking-in-Place or torso-directed steering, provide audio feedback for footsteps, and improve the immersive experience for multiple users.
In contrast to the widespread use of 6-DOF pointing devices, freehand user interfaces in Immersive Virtual Environments (IVEs) are non-intrusive. However, for gesture interfaces, the definition of trigger signals is challenging. Mechanical devices, dedicated trigger gestures, and speech recognition are frequently used options, but each comes with its own drawbacks. In this paper, we present an alternative approach that allows events to be triggered precisely and with low latency using microphone input. In contrast to speech recognition, the user only blows into the microphone. The audio signature of such blow events can be recognized quickly and precisely. The results of a user study show that the proposed method allows users to successfully complete a standard selection task and performs better than expected against a standard interaction device, the Flystick.
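A simple way to picture the detection of blow events is a short-time energy detector on the microphone stream: a blow produces several consecutive frames of high RMS energy, unlike brief clicks. The sketch below is an illustrative approximation with assumed thresholds and frame sizes, not the recognition method used in the paper.

```python
import numpy as np

def detect_blow(samples, frame_len=256, energy_threshold=0.2, min_frames=3):
    """Return True if a blow-like burst is present in a mono audio buffer.

    samples: NumPy array of float samples in [-1, 1]. A blow shows up as
    several consecutive frames of high RMS energy; min_frames suppresses
    short clicks.
    """
    n_frames = len(samples) // frame_len
    consecutive = 0
    for i in range(n_frames):
        frame = samples[i * frame_len:(i + 1) * frame_len]
        rms = np.sqrt(np.mean(frame ** 2))
        consecutive = consecutive + 1 if rms > energy_threshold else 0
        if consecutive >= min_frames:
            return True
    return False

# Silence vs. a loud noise burst.
print(detect_blow(np.zeros(4096)), detect_blow(np.random.uniform(-1, 1, 4096)))
```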
Avoiding operating errors is of central importance, especially in safety-critical systems. To facilitate the recall of previously learned skills for operating and controlling technical systems, and thus to avoid errors, so-called refresher interventions are used. So far, these have been elaborate simulation- or simulator-based trainings that refresh previously acquired skills through repeated execution, so that they can be recalled correctly in rarely occurring critical situations. The present work shows how the goal of skill recall can also be achieved without refreshers, by means of a gaze-guiding component that is embedded into a visual user interface for operating the technical process and supports skill retrieval through targeted, context-dependent highlighting and overlays. The effect of this concept is currently being investigated in a larger DFG-funded study.
Various conceptual approaches for the creation and presentation of virtual museums can be found. However, little work exists that concentrates on collaboration in virtual museums. Supporting collaboration in virtual museums provides various benefits for the visit as well as for the preparation and creation of virtual exhibits. This paper addresses one major problem of collaboration in virtual museums: the awareness of visitors. We use a CAVE Automatic Virtual Environment (CAVE) for the visualization of generated virtual museums to offer simple awareness through co-location. Furthermore, the use of smartphones during the visit enables visitors to create comments or to access exhibit-related metadata. Thus, the main contribution of this ongoing work is the presentation of a workflow that enables an integrated deployment of generic virtual museums into a CAVE, which will be demonstrated by deploying the virtual Leopold Fleischhacker Museum.
This paper gives an overview on crowdsourcing practices in virtual museums. Engaged nonprofessionals and specialists support curators in creating digital 2D or 3D exhibits, exhibitions and tour planning and enhancement of metadata using the Virtual Museum and Cultural Object Exchange Format (ViMCOX). ViMCOX provides the semantic structure of exhibitions and complete museums and includes new features, such as room and outdoor design, interactions with artwork, path planning and dissemination and presentation of contents. Application examples show the impact of crowdsourcing in the Museo de Arte Contemporaneo in Santiago de Chile and in the virtual museum depicting the life and work of the Jewish sculptor Leopold Fleischhacker. A further use case is devoted to crowd-based support for restoration of high-quality 3D shapes.
This workshop aims to gather active researchers and practitioners in the field of formal methods in the context of user interfaces, interaction techniques, and interactive systems. The main objectives are to look at how the definition and use of formal methods for interactive systems have evolved since the last book in the field nearly 20 years ago, and to identify important themes for the next decade of research. The goals of this workshop are to offer an exchange platform for scientists who are interested in the formal modeling and description of interaction, user interfaces, and interactive systems, and to discuss existing formal modeling methods in this field. Participants will be asked to present their perspectives, concepts, and techniques for formal modeling along one of two case studies: the control of a nuclear power plant and an air traffic management arrival manager.
This paper presents the Formal Interaction Logic Language (FILL) as a modeling approach for describing user interfaces in an executable way. In the context of the Workshop on Formal Methods in Human-Computer Interaction, this work presents FILL by first introducing its architectural structure, its visual representation, and its transformation to reference nets, a special type of Petri nets, and finally by discussing FILL in the context of two use cases proposed by the workshop. Thereby, this work shows how FILL can be used to model automation as part of the user interface model, as well as how formal reconfiguration can be used to implement user-based automation given a formal user interface model.
The 6th annual IEEE 3DUI Contest focuses on Virtual Music Instruments (VMIs) and on 3D user interfaces for playing them. The contest is part of the IEEE 2015 3DUI Symposium held in Arles, France, and is open to anyone interested in 3D User Interfaces (3DUIs), from researchers to students, enthusiasts, and professionals. The purpose of the contest is to stimulate innovative and creative solutions to challenging 3DUI problems. Due to the recent explosion of affordable and portable 3D devices, this year's contest will be judged live at 3DUI by selected 3DUI experts during on-site presentations at the conference. Therefore, contestants are required to bring their systems for live judging and for attendees to experience them.
In this work, we report on a pilot study we conducted, and on a study design, to examine the effects and applicability of rotation gain in CAVE-like virtual environments. The results of the study will give recommendations for the maximum levels of rotation gain that are reasonable in algorithms for enlarging the virtual field of regard or redirected walking.
So far, the human-machine interface (HMI) of test and diagnosis software in the automotive industry has been created by third-party manufacturers for specific test processes according to inconsistent and unsystematic specifications and has been marketed together with the corresponding hardware. A preliminary study conducted in 2012 at an automobile manufacturer showed that these specifications unfortunately rarely meet the desired requirements for human-centered and user-friendly design. In this work, a well-known method is used to investigate new requirements for the HMI in the test and diagnosis area at the assembly line with regard to usability and human factors.
In the recent past, efforts have been made to adopt immersive virtual reality (IVR) systems as a means for design reviews in factory layout planning. While several solutions for this scenario have been developed, their integration into existing planning workflows has not been discussed yet. From our own experience of developing such a solution, we conclude that the use of IVR systems such as CAVEs is rather disruptive to existing workflows. One major reason for this is that IVR systems are not available everywhere due to their high costs and large physical footprint. As a consequence, planners have to travel to sites offering such systems, which is especially prohibitive as planners are usually geographically dispersed. In this paper, we present a concept for integrating IVR systems into the factory planning process by means of a 3D collaborative virtual environment (3DCVE) without disrupting the underlying planning workflow. The goal is to combine non-immersive and IVR systems to facilitate collaborative walkthrough sessions. However, this scenario poses unique challenges to interactive collaborative work that to the best of our knowledge have not been addressed so far. In this regard, we discuss approaches to viewpoint sharing, telepointing and annotation support that are geared towards distributed heterogeneous 3DCVEs.
This paper presents a software framework that supports the development of haptic-enabled physics simulations. The framework provides tools aiming to facilitate a fast prototyping process by utilizing component- and flow-oriented architectures, while maintaining the capability to create efficient code that fulfills the performance requirements induced by the target applications. We argue that such a framework should not only ease the creation of prototypes but also help to effectively and efficiently evaluate them. To this end, we provide analysis tools and the possibility to build problem-oriented evaluation environments based on the described software concepts. As a motivating use case, we present a project with the goal to develop a haptic-enabled medical training simulator for a maxillofacial procedure. With this example, we demonstrate how the described framework can be used to create a simulation architecture for a complex haptic simulation and how the tools assist in the prototyping process.
The Human Brain Project (HBP) is one of the largest scientific initiatives worldwide dedicated to research on the human brain. Over 80 research groups from a broad variety of scientific areas, such as neuroscience, simulation science, high-performance computing, robotics, and visualization, work together in this European research initiative. The work at hand identifies chances and challenges for cognitive systems engineering resulting from the HBP research activities. Besides the main goal of the HBP, gaining deeper insights into the structure and function of the human brain, cognitive systems research can directly benefit from the creation of cognitive architectures, the simulation of neural networks, and the application of these in the context of (neuro-)robotics. Nevertheless, challenges arise regarding the utilization and transformation of these research results for cognitive systems, which are discussed in this paper. Tools necessary to cope with these challenges include visualization techniques that help to understand and gain insights into complex data. Therefore, this paper presents a set of visualization techniques developed by the Virtual Reality Group at RWTH Aachen University.
Start-up procedures are non-routine tasks in process control that are prone to skill decay. To mitigate skill decay, job performance aids (JPAs) are used. A dynamic JPA was developed for the human-computer interface, which guides the gaze of the user by visual cues. Two gaze-guiding applications (spotlight, spotlight plus integration of information) were tested in terms of their impact on start-up performance after a three-week and a six-month period of non-use (N = 46). Irrespective of the gaze-guiding format, participants with three weeks of non-use outperformed participants with six months of non-use. Results show that the best indicator of skill loss is the length of the period of non-use. It is assumed that the advantage of gaze guiding plus integration of information was not shown due to the loss of interface-operating knowledge. In conclusion, gaze guiding for long periods of non-use needs to be accompanied by interface-operating knowledge.
Adaptable user interfaces (UIs) have shown a great variety of advantages in human-computer interaction compared to classic UI designs. We show how adaptable UIs can be built by introducing coloured Petri nets to connect the UI's physical representation with the system to be controlled. UI development benefits from formal modelling approaches through the resulting close integration of creation, execution, and reconfiguration of formal UI models. Thus, adaptation does not only change the physical representation, but also the connecting Petri net. For the latter transformation, we enhance the DPO rewriting formalism by using an order on the set of labels and softening the label-preserving property of morphisms, i.e., an element can also be mapped to another element if its label is larger. We use lattices to ensure correctness and state application conditions of rewriting steps. Finally, we define an order compatible with our framework for use in our implementation.
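Stated as a formula, the softened label-preserving property described above can be paraphrased as follows (a reading of the condition in the text, with l_G and l_H denoting the labeling functions and the labels forming a lattice):

```latex
% Classic DPO rewriting: a morphism m from G to H must preserve labels exactly,
%   l_H(m(x)) = l_G(x)   for every element x of G.
% Softened variant (labels ordered in a lattice (L, <=)): an element may be
% mapped to an element carrying a larger label,
%   l_H(m(x)) >= l_G(x)  for every element x of G.
\[
  \forall x \in G : \quad l_H\bigl(m(x)\bigr) \;\ge\; l_G(x)
\]
```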
Immersive virtual environments (IVEs) are an appropriate platform for 3D data visualization and exploration as, for example, the spatial understanding of these data is facilitated by stereo technology. However, in comparison to desktop setups, a lower latency and thus a higher frame rate are mandatory. In this paper, we argue that current realizations of direct volume rendering do not allow for a desirable visualization with respect to latency and visual quality, i.e., one that does not impair the immersion in virtual environments. To this end, we analyze published acceleration techniques and discuss their potential in IVEs; furthermore, head tracking is considered as a main challenge but also as a starting point for specific optimization techniques.
The available memory bandwidth of existing high-performance computing platforms is increasingly turning out to be the limiting factor for various applications. Therefore, modern microarchitectures integrate the memory controller on the processor chip, which leads to a non-uniform memory access behavior of such systems. This access behavior in turn entails major challenges in the development of shared-memory parallel applications. Improperly implemented memory access results in a poor ratio between local and remote memory accesses and causes low performance on such architectures. To address this problem, the developers of such applications rely on tools that make these kinds of performance problems visible. This work presents a new tool for the visualization of performance data on non-uniform memory access behavior. Because of the visual design of the tool, the developer is able to judge the severity of remote memory access in a time-dependent simulation, which is currently not possible with existing tools.
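The severity of remote memory access that such a tool visualizes can be thought of as, per time step and NUMA node, the fraction of memory accesses that touch memory owned by a different node. The sketch below shows such an aggregation under an assumed sample layout; it is not the tool's actual data model.

```python
def remote_access_ratio(samples):
    """Aggregate sampled memory accesses into a remote-access ratio per step.

    samples: iterable of (time_step, accessing_node, owning_node, n_accesses).
    Returns {time_step: {node: remote_fraction}} suitable for a heatmap over
    simulation time; a fraction near 1 means a node mostly touches memory
    owned by other nodes.
    """
    totals, remote = {}, {}
    for step, node, owner, count in samples:
        totals.setdefault(step, {}).setdefault(node, 0)
        remote.setdefault(step, {}).setdefault(node, 0)
        totals[step][node] += count
        if node != owner:
            remote[step][node] += count
    return {s: {n: remote[s][n] / totals[s][n] for n in totals[s]}
            for s in totals}

# Node 0 makes 20% remote accesses in step 0; node 1 stays fully local.
print(remote_access_ratio([(0, 0, 0, 800), (0, 0, 1, 200), (0, 1, 1, 1000)]))
```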
In this paper, we present a first approach to a new test and diagnosis user interface (TDUI) as an application scenario for ambient intelligent systems in production environments, which represent one specific domain for urban areas. This approach is based on a first evaluation study in 2012 that identified relevant requirements for the development of such TDUIs. It is planned to be implemented based on a formally defined Interaction Logic Layer (ILL), which connects the physical user interface layout with the test system. It allows the user to easily control and adapt the test and diagnosis application and to display and convey information characterizing the process state. Finally, this approach makes the TDUI implementation highly flexible and adaptive regarding the individual user and the requirements of the production environment.
This paper describes a metadata-based tour recommendation approach for on-site museum and outdoor visits. Recommendations are calculated utilizing spatial exhibit distribution, museum layout, user and navigation profiles, additional constraints, as well as artwork descriptors. We propose a service-oriented architecture for generating 3D virtual museums as well as a WebSocket layer to process sensor data during the museum visit.
A methodology for evaluating the ViMCOX metadata format for designing virtual museums is discussed. Two evaluation approaches are presented, addressing (1) design aspects of virtual museums, the completeness of the metadata from the visitor’s point of view as well as measuring the acceptance of virtual museums and (2) a qualitative survey in collaboration with museum experts to identify metadata requirements and feature sets for curator tool implementations.
Research in the field of Ambient Assisted Living (AAL) introduces assistive technologies in various contexts, e.g., medical applications in clinics or activities in home environments. Assistive technologies have the potential to help prevent the isolation and lack of social support arising from a growing number of older adults living in single-person households in urban areas. In order to examine the role of such technologies and identify challenges and potentials of systems and services, we reviewed the literature of the past decade. Whereas many contributions support the individual user or social connection, only few integrate location-based information or urban structures. Future research is confronted with a multitude of challenges, e.g., regarding network technologies, market uptake, or adaptation, and potentials, e.g., supporting or establishing social neighborhood structures. This literature review contributes an overview of the state of the art and of research areas that will draw increasing focus due to demographic change.
Inefficient and error-prone interaction between human operators and technical systems has been the reason for various catastrophic accidents in the past. User interfaces implement the communication between a human user and a technical system, which is why inaccurate user interface design has been identified as one major factor in those errors. The use of adaptive user interfaces is one possible solution to reduce inefficient interaction by adapting the user interface to a specific user, task, or context. However, currently no self-contained formal approach exists that allows for the creation of adaptive user interfaces, despite various advantages of formal methods: interaction becomes verifiable, formal methods close the gap between modeling and implementation by using executable formal languages, and they allow for the use of existing rewriting concepts, making formal models adaptable. This paper introduces a new approach to a formal rule generation concept, which enables a flexible creation of adaptive user interfaces. This concept is based on a formal modeling and reconfiguration approach for the creation and adaptation of user interfaces. The applicability of this approach is shown through an implementation of an adaptive user interface for adaptive automation. The main contribution of the presented work is a new concept for rule generation that is capable of adapting formally modeled user interfaces.
Graphs play an important role in data analysis. Graphs with a natural spatial embedding, in particular, can benefit from a 3D visualization. But even more than in 2D, graphs visualized as intuitively readable 3D node-link diagrams can become very cluttered, which makes graph exploration and data analysis difficult. For this reason, we focus on the challenge of reducing edge clutter by utilizing edge bundling. In this paper, we introduce a parallel, edge-cluster-based accelerator for the force-directed edge bundling algorithm presented in [Holten2009]. This opens up the possibility for user interaction during and after both the clustering and the bundling.
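One plausible reading of the edge-cluster-based acceleration is that the expensive pairwise compatibility forces of FDEB are only evaluated between edges within the same cluster, and clusters can be processed in parallel. The sketch below illustrates this idea with a naive direction-based grouping; the actual clustering criterion of the presented algorithm is not reproduced here.

```python
import numpy as np

def cluster_edges_by_direction(endpoints, n_bins=8):
    """Group edges into clusters of roughly similar direction.

    endpoints: float array of shape (E, 2, 3) with start and end point per edge.
    Bundling forces are later only computed between edges sharing a cluster,
    reducing the quadratic pair count of the original FDEB formulation; each
    cluster can then be bundled independently (and thus in parallel).
    """
    directions = endpoints[:, 1] - endpoints[:, 0]
    directions /= np.linalg.norm(directions, axis=1, keepdims=True) + 1e-9
    # Naive criterion: quantize the azimuth angle (x/y plane) into n_bins bins.
    azimuth = np.arctan2(directions[:, 1], directions[:, 0])   # range (-pi, pi]
    bins = ((azimuth + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    clusters = {}
    for edge_id, b in enumerate(bins):
        clusters.setdefault(int(b), []).append(edge_id)
    return clusters

edges = np.array([[[0, 0, 0], [1, 0, 0]],
                  [[0, 1, 0], [1, 1, 0]],
                  [[0, 0, 0], [0, 1, 0]]], dtype=float)
print(cluster_edges_by_direction(edges))  # two parallel edges share a cluster
```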