Multimodal emotion recognition is an emerging research direction in affective computing and human-computer interaction, and it plays a vital role in achieving a more efficient and intelligent human-computer interaction experience. Most current multimodal emotion recognition methods rely on external information such as facial expressions, speech signals, and body postures. Such information is susceptible to illumination changes and environmental noise, and it can be consciously controlled or faked. Because of their intrinsic relation to human emotion, physiological signals reflect changes in a person's emotional state more faithfully and objectively than external information. This project therefore starts from the intrinsic characteristics of physiological signals and, building on the theory of multimodal emotion recognition, carries out research on multimodal physiological-signal fusion for dimensional emotion recognition. To overcome the key problems in combining physiological signals with multimodal emotion recognition, we first construct a feature fusion model for single-modal, multi-channel signals, which removes redundant information while retaining as much emotion-relevant information as possible and reducing computational complexity. Second, to capture the complicated intra-modal and inter-modal relations among physiological signals, a hierarchical multimodal feature fusion model is established to extract discriminative and salient information. Third, to handle incomplete modality data, a robust model based on deep joint learning is proposed to capture the hidden semantic information shared across modalities. Finally, we embed the proposed models into dimensional emotion recognition and test the effectiveness of the proposed multimodal feature fusion models. The project and its results have important theoretical and practical significance for enriching the research field of multimodal emotion recognition.
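The second research point describes a two-level, hierarchical fusion: intra-modal fusion within each modality's channels, then inter-modal fusion across modalities, feeding a dimensional (valence/arousal) predictor. The sketch below is a minimal PyTorch reading of that idea; the layer sizes, the attention-weighted inter-modal combination, and the two-output regression head are illustrative assumptions, not the project's actual architecture.

```python
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Intra-modal fusion: maps one modality's multi-channel
    feature vector to a compact, shared-size embedding."""
    def __init__(self, in_dim, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, emb_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class HierarchicalFusion(nn.Module):
    """Two-level fusion: per-modality encoders, then an
    attention-weighted combination across modalities,
    followed by a valence/arousal regression head."""
    def __init__(self, in_dims, emb_dim=64):
        super().__init__()
        self.encoders = nn.ModuleList(
            [ModalityEncoder(d, emb_dim) for d in in_dims])
        self.attn = nn.Linear(emb_dim, 1)  # inter-modal attention scores
        self.head = nn.Linear(emb_dim, 2)  # valence, arousal

    def forward(self, inputs):
        # inputs: one (batch, in_dim) tensor per modality
        embs = torch.stack(
            [enc(x) for enc, x in zip(self.encoders, inputs)],
            dim=1)                                    # (batch, M, emb_dim)
        weights = torch.softmax(self.attn(embs), dim=1)  # (batch, M, 1)
        fused = (weights * embs).sum(dim=1)              # (batch, emb_dim)
        return self.head(fused)

# Toy usage with hypothetical EEG-like (32-dim) and peripheral (8-dim) features.
model = HierarchicalFusion(in_dims=[32, 8])
eeg, peripheral = torch.randn(4, 32), torch.randn(4, 8)
print(model([eeg, peripheral]).shape)  # torch.Size([4, 2])
```

The attention weights here stand in for whatever inter-modal weighting the project actually learned; any differentiable combination (gating, bilinear pooling) could occupy the same slot.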
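The third research point, robustness to incomplete modality data via deep joint learning, is commonly realized with a shared latent space that any available modality can map into. The following sketch shows that pattern with a joint autoencoder; the two-modality setup, layer sizes, and simple averaging of latents are hypothetical simplifications, not the project's published model.

```python
import torch
import torch.nn as nn

class JointAutoencoder(nn.Module):
    """Joint representation for two modalities: each encoder maps its
    modality into a shared latent space, and both decoders reconstruct
    from that latent, so either modality alone can stand in for the pair."""
    def __init__(self, dim_a, dim_b, latent=32):
        super().__init__()
        self.enc_a = nn.Sequential(nn.Linear(dim_a, 64), nn.ReLU(), nn.Linear(64, latent))
        self.enc_b = nn.Sequential(nn.Linear(dim_b, 64), nn.ReLU(), nn.Linear(64, latent))
        self.dec_a = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, dim_a))
        self.dec_b = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, dim_b))

    def forward(self, a=None, b=None):
        # Encode whatever is present; average the latents if both exist.
        zs = []
        if a is not None:
            zs.append(self.enc_a(a))
        if b is not None:
            zs.append(self.enc_b(b))
        z = torch.stack(zs).mean(dim=0)
        return self.dec_a(z), self.dec_b(z), z

model = JointAutoencoder(dim_a=32, dim_b=8)
a = torch.randn(4, 32)
rec_a, rec_b, z = model(a=a)             # modality b missing at test time
loss = nn.functional.mse_loss(rec_a, a)  # add a rec_b term whenever b is observed
```

Training with cross-reconstruction terms (reconstructing the absent modality from the present one) is what forces the latent `z` to carry the shared semantic information the abstract refers to.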
Numerous findings in neuroscience and psychology show that physiological signals are closely related to human emotion. Building on the theory of multimodal feature fusion and taking full account of the intrinsic characteristics of physiological signals, this project addressed the key problems in emotion recognition from multimodal physiological signals by exploring and constructing multimodal feature fusion models tailored to physiological signals, effectively improving the accuracy and robustness of emotion recognition. The main achievements include: 1) an adaptive method for improving physiological signal quality, which reduces the interference of noise and artifacts; 2) an emotion recognition method that fuses handcrafted and deep features, which improves the discriminability and robustness of emotion features and thereby the accuracy of recognition; 3) a personalized multimodal emotion recognition algorithm based on clustering, which accounts for individual differences in physiological characteristics. The multimodal emotion analysis and recognition methods developed in this project have important theoretical significance and academic value in fields such as public security, healthcare, and human-computer interaction.
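Achievement 1) concerns adaptively improving signal quality by suppressing noise and artifacts. A standard building block for such preprocessing is zero-phase band-pass filtering; the sketch below shows that step with SciPy. The sampling rate and cutoff frequencies are illustrative only, and the project's actual adaptive method is not specified here.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, fs, lo, hi, order=4):
    """Zero-phase Butterworth band-pass: suppresses baseline drift
    below `lo` Hz and high-frequency noise above `hi` Hz."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)  # forward-backward filtering: no phase shift

fs = 128.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)
raw = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # noisy 10 Hz tone
clean = bandpass(raw, fs, lo=4.0, hi=45.0)  # keep a typical EEG analysis band
```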
Data last updated: 2023-05-31
Recognizing rivers in remote sensing images of cold and arid regions with an improved LinkNet
Digital image source forensics based on wavelet higher-order statistics
Aspect-level sentiment analysis with memory networks combining part of speech, position, and word sentiment
Crop disease recognition based on attention mechanisms and multi-scale residual networks
Research on user behavior pattern mining algorithms in mobile context-aware environments
Research on fusion and dimensionality reduction methods for multimodal physiological emotion data of varying quality
A method for modeling the human-robot communication atmosphere field based on multimodal emotion recognition
Research on multimodal information fusion modeling for interactive affective computing
Research on straw burning monitoring algorithms for the Ningxia region based on multimodal information fusion and feature recognition