We continuously offer proposals for bachelor's and master's thesis projects in all areas of our research activities (see our research areas page) and related subjects, covering most topics in Virtual Reality and Scientific Visualization. Thesis topics are usually specified in cooperation with one of our research assistants and/or Prof. Kuhlen, taking into account the student's individual interests and previous knowledge as well as the current research agenda of the Virtual Reality group (e.g., ongoing academic or industrial cooperations). If you are interested in a thesis project in Virtual Reality, please contact us. To ensure a successful completion of the thesis, we usually expect our students to have
- taken the "Basic Techniques in Computer Graphics" lecture (for bachelor students)
- taken the "Virtual Reality" lecture (for master students)
- a good working knowledge of C++
- or an equivalent qualification.
Computer-controlled, embodied, intelligent virtual agents are increasingly embedded in applications to enliven virtual sceneries. Among these, conversational virtual agents are of prime importance. Realistic and plausible talking characters require adequate facial expressions and lip synchronization. The goal of this bachelor thesis is to enable effective yet easy-to-integrate lip sync in our Unreal projects, both for text-to-speech input and for recorded speech.
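As background on the topic: the core of a simple lip-sync pipeline is mapping timed phonemes (obtained from the TTS engine or a forced aligner) to visemes that drive the character's mouth shapes. The following minimal Python sketch illustrates the idea only; the reduced viseme set and all names are placeholders, not part of any Unreal Engine API, and a real implementation would drive morph-target weights inside the engine.

```python
# Illustrative phoneme-to-viseme mapping (placeholder viseme set;
# real viseme inventories are larger).
PHONEME_TO_VISEME = {
    "AA": "open", "AE": "open", "AH": "open",
    "B": "closed", "M": "closed", "P": "closed",
    "F": "teeth_on_lip", "V": "teeth_on_lip",
    "OW": "rounded", "UW": "rounded",
    "S": "narrow", "Z": "narrow",
}

def to_viseme_track(timed_phonemes):
    """Convert (phoneme, start, end) triples into a viseme track,
    merging consecutive segments that share the same viseme."""
    track = []  # list of [viseme, start, end]
    for phoneme, start, end in timed_phonemes:
        viseme = PHONEME_TO_VISEME.get(phoneme, "rest")
        if track and track[-1][0] == viseme and track[-1][2] == start:
            track[-1][2] = end  # extend the previous segment
        else:
            track.append([viseme, start, end])
    return track
```

Such a track could then be sampled per frame to set blend-shape weights on the character; the thesis would work out how to integrate this cleanly with Unreal's animation system.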
- Programming experience with C++ and Unreal Engine 4
- Knowledge of reinforcement learning is desirable
- Basic knowledge of computer graphics (e.g., transformations, geometries, lights, cameras, …)
- Ability to prioritize and manage assigned tasks self-reliantly within a given timeframe
- Open and transparent communication skills to ensure that we are on the same page
Jonathan Wendt, M.Sc.
Virtual Humans can be embedded into virtual environments to guide users through scenes, teach them, or point out interesting areas. Their behavior has a large influence on the authenticity of the virtual environment and the user's immersion. One important aspect of this behavior is their movement during speech: co-verbal gestures. The goal of this thesis is to design, develop, and test a system that generates authentic co-verbal gestures using recurrent neural networks (RNNs), e.g., Long Short-Term Memory (LSTM) networks. Training data for these networks will be provided. The system should become part of an existing larger software suite for embedding believable Virtual Humans into our framework.
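To give an idea of the LSTM recurrence mentioned above, here is a single-unit, pure-Python cell. It is purely illustrative: the weights and the scalar input are placeholders, and a real gesture model would be a vectorized, multi-layer network trained on the provided motion data with a machine learning framework.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class LSTMCell:
    """Single-unit LSTM cell with scalar input, for illustration only."""

    def __init__(self, w_i, w_f, w_o, w_c):
        # Each gate takes a (input weight, recurrent weight, bias) triple.
        self.w_i, self.w_f, self.w_o, self.w_c = w_i, w_f, w_o, w_c

    def step(self, x, h, c):
        i = sigmoid(self.w_i[0] * x + self.w_i[1] * h + self.w_i[2])    # input gate
        f = sigmoid(self.w_f[0] * x + self.w_f[1] * h + self.w_f[2])    # forget gate
        o = sigmoid(self.w_o[0] * x + self.w_o[1] * h + self.w_o[2])    # output gate
        g = math.tanh(self.w_c[0] * x + self.w_c[1] * h + self.w_c[2])  # candidate state
        c_new = f * c + i * g              # cell state carries long-term memory
        h_new = o * math.tanh(c_new)       # hidden state is the per-step output
        return h_new, c_new
```

The gating mechanism lets the network retain context over long input sequences, which is why LSTMs are a natural candidate for mapping speech features to gesture motion over time.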
Prerequisites: Good programming skills in C++; knowledge of machine learning techniques is desirable
Photo: ©USC Institute for Creative Technologies
Jonathan Wendt, M.Sc.