SCENARIO A: Distributed real-time choreographic collaboration

In this scenario a local pair (performer and choreographer) collaborates with a remote pair of performer and choreographer. Augmented reality displays are used to collaborate on the development of a piece and for training. The core of the system is the local performer's view, which is augmented with video of the remote performer, and vice versa. In addition to the video stream, real-time metrics about the local and remote performers are shown (Figure 2). This scenario requires low-latency, high-bandwidth communication across significant distances.
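To keep latency low, the real-time metrics would likely travel alongside the video in a compact binary format over an unreliable transport such as UDP/RTP. The sketch below shows one plausible wire format for a per-frame metrics sample; the field names and the choice of metrics are illustrative assumptions, not part of the scenario description.

```python
import struct
import time

# Hypothetical compact wire format for one per-frame metrics sample:
# a sequence number, a capture timestamp, and three example metrics
# (joint velocity, acceleration, movement smoothness). Network byte
# order ("!"): uint32 + double + 3 x float32 = 24 bytes per datagram.
METRIC_FORMAT = "!Id3f"

def pack_metrics(seq, timestamp, velocity, acceleration, smoothness):
    """Serialize one metrics sample into a small, UDP-sized payload."""
    return struct.pack(METRIC_FORMAT, seq, timestamp,
                       velocity, acceleration, smoothness)

def unpack_metrics(payload):
    """Deserialize a payload back into (seq, timestamp, v, a, s)."""
    return struct.unpack(METRIC_FORMAT, payload)

# A deployment would send these datagrams over UDP next to the video
# stream; here we only demonstrate the serialization round trip.
datagram = pack_metrics(1, time.time(), 0.8, 0.1, 0.95)
```

A fixed 24-byte payload keeps each sample well under typical MTU limits, so a lost datagram costs only one frame of metrics rather than stalling the stream.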



SCENARIO B: Real-time human-to-virtual-character interaction

In this scenario a (biological) human interacts with a virtual human in real time. Such a system can be used, e.g., for social skills training and for psychological research more broadly.



The scenario integrates heterogeneous sensing and data processing with state-of-the-art virtual human technology and with control models grounded in psychology and cognitive science.
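Such an integration implies a sense-interpret-respond loop: sensor data about the human is fused into percepts, and a control model maps those percepts onto the virtual human's behavior. A minimal sketch of that loop, with placeholder sensor channels and a toy rule-based control model (both assumptions, not taken from the scenario text):

```python
from dataclasses import dataclass

@dataclass
class Percept:
    """Fused sensor state about the human interlocutor (placeholder fields)."""
    gaze_on_agent: bool   # e.g. from an eye tracker
    speaking: bool        # e.g. from a voice activity detector

def select_behavior(percept):
    """Toy rule-based control model: react to basic social signals.

    A psychologically grounded model would replace these rules with,
    e.g., a turn-taking or appraisal model; the interface stays the same.
    """
    if percept.speaking:
        return "listen_and_nod"
    if percept.gaze_on_agent:
        return "make_eye_contact"
    return "idle"
```

Keeping sensing, percept fusion, and behavior selection behind this kind of narrow interface is what lets the heterogeneous sensors and the control models evolve independently.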

SCENARIO C: Movement-based database retrieval

This project will establish a system in which users can search and retrieve motion clips from a repository of human gestures and movements. This repository (database) will be built from both existing data and newly created libraries of human gestures and movements (animation and motion capture). The input module will be intuitive to operate and implemented with existing consumer electronics: users can input a gesture via common gesture input devices such as smartphones or the Kinect system. After the gesture has been captured, an algorithm will characterize the movement as a feature set, which will serve as the search parameters for the repository. The repository will comprise a distributed set of data structures that store motion-capture and animation clips, with middleware that makes the different formats compatible and transparent to the user.
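The capture-characterize-search pipeline above can be sketched end to end. The feature set below (path length, bounding-box aspect ratio, mean speed) and the brute-force nearest-neighbor search are deliberately simple placeholders; the project's actual descriptor and indexing scheme are not specified in this text.

```python
import math

def extract_features(points):
    """Characterize a captured gesture, given as a list of (x, y, t)
    samples, as a small feature vector (assumed descriptor)."""
    path_len = sum(math.dist(points[i][:2], points[i + 1][:2])
                   for i in range(len(points) - 1))
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    width, height = max(xs) - min(xs), max(ys) - min(ys)
    aspect = width / height if height else 0.0
    duration = points[-1][2] - points[0][2]
    mean_speed = path_len / duration if duration else 0.0
    return (path_len, aspect, mean_speed)

def search(repository, query_features):
    """Rank (clip_id, features) entries by Euclidean distance in
    feature space; a real repository would use an index, not a scan."""
    return sorted(repository,
                  key=lambda item: math.dist(item[1], query_features))

# Toy repository of pre-characterized clips (ids and values invented).
repo = [("wave", (4.0, 2.0, 1.0)),
        ("bow", (1.0, 0.5, 0.2))]
```

Because clips are compared only through their feature vectors, the middleware can store motion-capture and animation data in different native formats and still serve a single, format-transparent search interface.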