Multimodal communication is the way people naturally communicate with each other. The information carried by the different modalities is complementary, but each modality plays a different role. To introduce multimodal communication into cyberspace, we need to analyze the relationships between the modalities and discover their correlations and unique characteristics. The main research achievements of the project are as follows:

1. Dependence Mapping for transferring information from one modality to another. It is well known that synthesizing realistic facial images with vivid expressions is important for communication in cyberspace, and it is also difficult. To make a synthesized face look realistic, we observed that facial textures and shapes are highly correlated: the appearance of the mouth, for example, is closely related to the shape of the mouth boundary, and facial texture variations are related to the overall face shape. Based on this observation, we defined the correlation as a Dependence Mapping and developed a method to find the concrete mapping function. We applied the method successfully to realistic mouth image generation and to facial image synthesis with realistic texture variation.

2. Comprehensive expression analysis and synthesis. A person's facial expression reflects his or her affective and emotional state in a rather complicated manner. To our knowledge, research on expression analysis and synthesis in computer vision is still at an early stage, and little work has addressed the relation between an expression and the affect beneath it. Motivated by the information contained in the Japanese Female Facial Expression database, we investigated the quantitative relation between a facial expression and its emotional constituents, and defined it as the Emotional Mapping Function.
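The project does not spell out the form of the Dependence Mapping, but the idea of predicting one modality's parameters from another's can be sketched as a linear mapping fitted by least squares; all data, dimensions, and variable names below are illustrative assumptions, not the project's actual formulation:

```python
import numpy as np

# Hypothetical training data: each row pairs a mouth-shape descriptor
# (e.g. boundary landmark coordinates) with the texture (appearance)
# parameters observed for that shape.
rng = np.random.default_rng(0)
shapes = rng.normal(size=(200, 8))              # 200 samples, 8 shape params
true_map = rng.normal(size=(8, 16))             # ground-truth dependence (for demo)
textures = shapes @ true_map + 0.01 * rng.normal(size=(200, 16))

# Learn the dependence mapping W so that texture ~= shape @ W
# (a bias column could be appended to handle non-zero means).
W, *_ = np.linalg.lstsq(shapes, textures, rcond=None)

# Given a new shape, predict the dependent texture parameters.
new_shape = rng.normal(size=(1, 8))
predicted_texture = new_shape @ W
print(predicted_texture.shape)  # (1, 16)
```

Any regression model could stand in for the least-squares fit; the point is only that once a concrete mapping function is learned, texture variation can be driven from shape alone.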
To extract such a mapping function from a rather small database, we use relative quantities to describe facial shape variation and the ERI (Expression Ratio Image) method to describe texture variation. As a result, the Emotional Mapping Function extracted from the small database can be used to synthesize facial expression images for arbitrary persons, controlled by parameters reflecting the inner emotional state.

3. Natural gesture analysis based on clothing texture information. We realized that the characteristic texture on a person's clothing can be used to extract arm posture, and we developed a method for natural arm posture extraction. To our knowledge, this issue had not been addressed before.

4. Operational and symbolic gesture recognition. In this project, both operational and symbolic gesture recognition methods were investigated. For operational gestures, a robust image segmentation and feature selection method was developed, and the transient phase between different gesture instructions is handled successfully by an HMM augmented with a transient model. For symbolic gestures, a robust hand-shape extraction and recognition method was proposed, with which 26 different kinds of symbols expressed by hand gestures can be recognized successfully.

5. Clustering methods for spatio-temporal signal processing. Natural gestures usually accompany vocal communication when people talk to each other. Since the number of gesture pattern types is usually limited, an automatic clustering method should be developed to analyze natural hand gestures. Because the appearance of the same kind of gesture may vary from one example to another, the clustering method must be designed carefully.
In our research we found that the choice of descriptive feature type is vital, and hence a novel hierarchical clustering method based on different feature measurements was proposed, with which complicated gesture sequences can be clustered successfully. The developed clustering method can be applied to general spatio-temporal signals with nonlinear time variations.

6. Target tracking based on the fusion of multiple modalities. Recently, vision-based tracking of people's activity has b
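The hierarchical clustering driven by different feature measurements (item 5) might be illustrated as a two-stage agglomerative scheme: cluster first on one feature type, then re-cluster each group on a second feature type. This is a minimal sketch under assumed synthetic features, not the project's actual algorithm:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)

# Hypothetical gesture sequences summarised by two feature types:
# a coarse motion feature (e.g. overall trajectory extent) and a
# finer shape feature. Values and names are illustrative only.
coarse = np.concatenate([rng.normal(0.0, 0.1, 30),
                         rng.normal(5.0, 0.1, 30)])[:, None]
fine = np.concatenate([rng.normal(0.0, 0.1, 15), rng.normal(2.0, 0.1, 15),
                       rng.normal(0.0, 0.1, 15), rng.normal(2.0, 0.1, 15)])[:, None]

# Stage 1: cluster on the coarse feature measurement.
top = fcluster(linkage(coarse, method="average"), t=2, criterion="maxclust")

# Stage 2: within each coarse cluster, re-cluster on the fine feature,
# yielding a hierarchy whose levels use different measurements.
labels = np.zeros(len(coarse), dtype=int)
next_id = 0
for c in np.unique(top):
    idx = np.where(top == c)[0]
    sub = fcluster(linkage(fine[idx], method="average"), t=2, criterion="maxclust")
    labels[idx] = sub + next_id
    next_id += sub.max()

print(len(np.unique(labels)))  # 4 leaf clusters
```

For real gesture sequences with nonlinear time variation, the per-level distance would be replaced by an alignment-aware measure (e.g. a dynamic-time-warping distance) rather than the Euclidean distance used here.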
This project studies information processing and fusion techniques for multimodal human-computer interaction. Taking the manipulation of virtual objects through combined speech and gesture as the concrete research object, it investigates (1) the dependence relations among different forms of information, which are used to guide the information processing procedure, reduce processing complexity, and improve the robustness of the interaction; and (2) self-learning mechanisms for human-computer interaction, so that the interaction can reflect personal characteristics. This research is a key technology for developing wearable computing, virtual reality, and related fields.
Data last updated: 2023-05-31
Indoor calibration methods and influencing-factor analysis for subgrade soil moisture sensors
Research on crime prediction algorithms based on multimodal information feature fusion
Efficient parallel radiation-diffusion algorithms on multi-block structured grids for inertial confinement fusion implosions
Dynamic modeling methods for hinged/locked structures based on nonlinear contact stiffness
Multi-space interactive collaborative filtering recommendation
Theory and methods of information fusion in multimodal human-computer interaction
Theory and methods of multimodal brain-function information fusion
Multimodal perception fusion and human-computer interaction for dexterous manipulation by service robots
Key technologies of audio-visual perception fusion and multimodal human-computer interaction for service robots