Project M9: Temporal Multimodal Information Processing in the HVA Workspace
PIs: Prof. Sandra Hirche, Dr. Zhuanghua Shi, Prof. Hermann J. Müller
In a haptic-visual-auditory (HVA) workspace, different processing and communication times for multimodal data often lead to cross-modal asynchrony and thus to a temporally inconsistent representation. However, in order for the human operator to experience the situation as if he/she were directly present and acting in the remote environment, multiple sources of sensory information must be presented so as to provide him/her with a coherent percept. To date, there are hardly any results on the influence of temporal inconsistencies on the transparency of the telepresence system and on the coherent perception of multimodal integration, especially in connection with the haptic modality. The focus of this project is therefore to investigate the temporal multimodal integration capacity of the human operator.
One line of this research is to systematically examine the psychophysical phenomena related to cross-modal temporal integration. Based on the behavioral results, a second line of research is to build a human perception model of cross-modal temporal integration, so that multimodal communication protocols and the scheduling of local augmenting measures can be evaluated in an HVA workspace. The model may thereby serve as a guideline for the design of multimodal telepresence systems.
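As a rough illustration of the kind of psychophysical measurement involved (not the project's actual model or data), cross-modal temporal integration is commonly quantified with a temporal order judgment (TOJ) task: a cumulative-Gaussian psychometric function is fitted to the proportion of "visual first" responses across stimulus onset asynchronies, yielding the point of subjective simultaneity (PSS) and a just-noticeable difference (JND). The sketch below simulates such data for a hypothetical visual-haptic observer and recovers these parameters; all numbers are assumptions for illustration.

```python
# Illustrative sketch: estimating PSS and JND from a simulated
# visual-haptic temporal order judgment (TOJ) task. Observer
# parameters and trial counts are hypothetical.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

rng = np.random.default_rng(0)

# Stimulus onset asynchronies in ms: negative = haptic leads, positive = visual leads.
soas = np.arange(-200, 201, 50)

# Assumed "true" observer parameters for the simulation.
true_pss, true_sigma = 30.0, 60.0  # ms
n_trials = 200                     # trials per SOA

# Simulate the proportion of "visual first" responses at each SOA.
p_true = norm.cdf(soas, loc=true_pss, scale=true_sigma)
p_obs = rng.binomial(n_trials, p_true) / n_trials

# Psychometric function: cumulative Gaussian over SOA.
def psychometric(soa, pss, sigma):
    return norm.cdf(soa, loc=pss, scale=sigma)

# Fit PSS (location) and sigma (slope) to the simulated responses.
(pss_hat, sigma_hat), _ = curve_fit(psychometric, soas, p_obs, p0=[0.0, 50.0])

# JND under the common 75%-correct convention.
jnd_hat = sigma_hat * norm.ppf(0.75)
print(f"PSS ~ {pss_hat:.1f} ms, JND ~ {jnd_hat:.1f} ms")
```

In an HVA setting, a systematic shift of the fitted PSS with communication delay would indicate how much cross-modal asynchrony the operator tolerates before the percept loses coherence.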