Mixed reality interactive storytelling: acting with gestures and facial expressions
This thesis aims to answer the following question: "How can gestures and facial expressions be used to control the behavior of an interactive entertainment application?" An answer to this question is presented and illustrated in the context of mixed reality interactive storytelling.
The first part describes the Artificial Intelligence (AI) mechanisms used to model and control the behavior of the application. We present an efficient real-time hierarchical planning engine and show how active modalities (such as intentional gestures) and passive modalities (such as facial expressions) can be integrated into the planning algorithm, so that the narrative (driven by the behavior of the virtual characters inside the virtual world) evolves in accordance with user interactions.
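To make the idea concrete, the following is a minimal sketch of hierarchical (HTN-style) planning in which a passive modality biases how a compound task is decomposed. All task names, state keys, and the `mood` signal are hypothetical illustrations, not the engine described in the thesis.

```python
def plan(state, tasks, methods, operators, mood=None):
    """Depth-first HTN decomposition (toy version).

    state     : dict of world facts
    tasks     : list of task names to achieve, in order
    methods   : compound task -> list of (condition, subtasks, mood_tag)
    operators : primitive task -> (precondition, effect)
    mood      : passive-modality signal (e.g. a recognized facial
                expression) that promotes matching decompositions
    """
    if not tasks:
        return []
    head, rest = tasks[0], tasks[1:]
    if head in operators:
        pre, eff = operators[head]
        if pre(state):
            new_state = dict(state)
            eff(new_state)               # apply the primitive's effect
            tail = plan(new_state, rest, methods, operators, mood)
            if tail is not None:
                return [head] + tail
        return None
    # Compound task: try each applicable decomposition; candidates whose
    # mood tag matches the passive modality are tried first.
    candidates = sorted(methods.get(head, []),
                        key=lambda m: 0 if m[2] == mood else 1)
    for cond, subtasks, _tag in candidates:
        if cond(state):
            result = plan(state, subtasks + rest, methods, operators, mood)
            if result is not None:
                return result
    return None


# Hypothetical toy domain: a virtual character greets the user, and the
# recognized expression steers the choice of greeting.
operators = {
    "wave": (lambda s: True, lambda s: s.update(greeted=True)),
    "nod":  (lambda s: True, lambda s: s.update(greeted=True)),
}
methods = {
    "greet": [
        (lambda s: True, ["wave"], "smiling"),  # preferred when user smiles
        (lambda s: True, ["nod"],  None),       # neutral fallback
    ],
}
```

With this domain, `plan({}, ["greet"], methods, operators, mood="smiling")` yields `["wave"]`, while a neutral user gets `["nod"]`: the same narrative goal, decomposed differently depending on the passive modality.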
The second part is devoted to the automatic recognition of user interactions. After briefly describing the implementation of a simple but robust rule-based gesture recognition system, the emphasis is placed on facial expression recognition. A complete solution is presented, integrating state-of-the-art techniques with original contributions; it covers face detection, facial feature extraction, and analysis. The proposed approach combines statistical learning and probabilistic reasoning to deal with the uncertainty inherent in modeling facial expressions.
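As a toy stand-in for the combination of statistical learning and probabilistic reasoning, the sketch below classifies an expression with a Gaussian naive-Bayes model over two hypothetical facial measurements. The classes, features, and parameter values are invented for illustration; the thesis's actual pipeline (face detection, feature extraction, analysis) is far richer.

```python
import math

# Hypothetical per-class (mean, std) parameters for two measurements:
# eyebrow raise and mouth-corner lift, each normalized to [0, 1].
# In practice such parameters would be learned from labeled examples.
MODEL = {
    "smile":    [(0.2, 0.15), (0.8, 0.2)],
    "surprise": [(0.9, 0.2),  (0.3, 0.2)],
    "neutral":  [(0.1, 0.1),  (0.1, 0.1)],
}

def gaussian(x, mean, std):
    """Gaussian probability density of x under N(mean, std^2)."""
    z = (x - mean) / std
    return math.exp(-0.5 * z * z) / (std * math.sqrt(2 * math.pi))

def classify(features, prior=None):
    """Return the maximum a posteriori class for a feature vector."""
    prior = prior or {c: 1.0 / len(MODEL) for c in MODEL}
    scores = {}
    for cls, params in MODEL.items():
        p = prior[cls]
        for x, (mean, std) in zip(features, params):
            p *= gaussian(x, mean, std)  # naive conditional independence
        scores[cls] = p
    return max(scores, key=scores.get)
```

For example, a low eyebrow raise with a high mouth-corner lift, `classify([0.15, 0.75])`, lands on `"smile"`; the probabilistic formulation lets noisy measurements shift the posterior gracefully instead of flipping a hard rule.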
School: Université catholique de Louvain
Source Type: Master's Thesis
Keywords: interactive storytelling, online planning, pattern recognition, gesture, mixed reality, artificial intelligence, facial expression
Date of Publication: 05/04/2007