This project studies sensorless motion sensing and real-time emotion computing, covering both the underlying theoretical problems and the key enabling technologies. First, to address critical limitations of current motion-sensing research, such as device dependence and constraints on sensing space and granularity, it leverages heterogeneous channel state information (CSI) extracted from multiple sources to tackle sensorless, multi-granularity motion sensing. The theoretical foundation is a signal-motion mapping model that characterizes, in terms of CSI, how motions distort the signal through multipath propagation and fading. To handle the heterogeneity of the data, we will propose a novel feature extraction method that produces unified features, from which multi-granularity motions can be distinguished. Second, to address major drawbacks of current emotion-computing research, such as high computational complexity and constraints on scenarios and emotion dimensions, we exploit multi-granularity motions to tackle real-time emotion computing. The theoretical foundation is a motion-emotion migration model that characterizes the relationship between motions and emotions in a multi-dimensional emotion space. To handle the heterogeneity of motion semantics, we will design a novel fusion method that extracts fused semantics from multi-granularity motions, based on which multi-dimensional emotions can be computed in real time. Finally, we will build a prototype to evaluate the proposed sensorless motion sensing and real-time emotion computing system. The prototype will also keep personalized activity and emotion logs for the test subjects so that their mental health can be evaluated accurately. By continuously accumulating such logs from different subjects, we aim to build a large motion-emotion database. In this way we can provide not only theoretical foundations for emerging areas such as intelligent medical services and mental health care, but also real-world data to support their research.
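For reference, the signal-motion mapping rests on the fact that a moving body changes the lengths of the reflected propagation paths. The following is a minimal sketch of a decomposition commonly used in the CSI sensing literature, not necessarily the project's exact model; the symbols H_s, a_k, d_k(t) and λ are introduced here purely for illustration:

\[
H(f,t) \;=\; H_s(f) \;+\; \sum_{k \in \mathcal{P}_d} a_k(f,t)\, e^{-j 2\pi d_k(t)/\lambda},
\]

where $H_s(f)$ collects the static paths (line of sight, walls, furniture), $\mathcal{P}_d$ is the set of paths reflected off the moving body, $a_k(f,t)$ is the complex attenuation of path $k$, $d_k(t)$ is its time-varying length, and $\lambda$ is the carrier wavelength. A motion that changes $d_k(t)$ by one wavelength rotates the corresponding phase by $2\pi$; this is the kind of signal distortion the mapping model characterizes across motion granularities.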
This project explores models and methods for sensorless motion sensing and real-time emotion computing in real-world environments, and conducts in-depth research on the resulting theoretical problems and key technologies. First, to address the device dependence, spatial constraints, and coarse granularity of existing sensor-based sensing research, we study multi-granularity, sensorless motion sensing driven by heterogeneous multi-source channel data. At the theoretical level, we build a signal-motion mapping model that uniformly characterizes how motions affect the signal through fading and multipath propagation in a confined physical space; at the methodological level, to handle the heterogeneity of multi-source channel data, we study fusion-feature-driven methods for multi-granularity motion sensing. Second, to address the scenario constraints, single emotion dimension, and high computational complexity of existing emotion-computing research, we study multi-dimensional, real-time emotion computing driven by multi-granularity motion data. At the theoretical level, we build a motion-emotion migration model that characterizes the transition relationship between multi-granularity behaviors and emotions in a multi-dimensional emotion state space; at the methodological level, to bridge the semantic gap among multi-granularity motion data, we study fusion-semantics-driven methods for multi-dimensional emotion computing. Finally, we develop a demonstration system of a motion-emotion transparent sensing space and build a large behavior-emotion database, providing a research platform and a data foundation for fields such as public safety, smart homes, and smart healthcare.
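To illustrate the kind of motion-emotion mapping described above, here is a minimal, hypothetical Python sketch that regresses fused motion features onto a two-dimensional valence-arousal emotion space with closed-form ridge regression. The feature dimensionality, the synthetic data, and the choice of a linear model are illustrative assumptions, not the project's migration model.

```python
# Hypothetical sketch: map fused multi-granularity motion features to a
# continuous valence-arousal emotion space with closed-form ridge regression.
# Feature dimension, sample count, and labels are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: 200 samples of 32-dimensional fused motion features,
# each labelled with (valence, arousal) coordinates in [-1, 1].
X = rng.normal(size=(200, 32))             # fused motion features
Y = rng.uniform(-1.0, 1.0, size=(200, 2))  # valence, arousal labels

# Closed-form ridge regression: W = (X^T X + lam * I)^(-1) X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

def estimate_emotion(features: np.ndarray) -> np.ndarray:
    """Project a fused motion-feature vector onto the valence-arousal plane."""
    return features @ W

print(estimate_emotion(X[0]))  # a point (valence, arousal) in the emotion space
```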
Facing the major common needs of fields such as public safety, smart homes, and smart healthcare for sensorless sensing and real-time emotion computing, and addressing the device dependence, spatial constraints, and coarse granularity of current sensor-based sensing research as well as the scenario constraints, single emotion dimension, and high computational complexity of current emotion-computing research, this project combines a bottom-up, big-data-driven approach with a top-down approach guided by psychological knowledge, and tackles the key scientific problem of "multi-granularity motion sensing and multi-dimensional emotion computing driven by heterogeneous multi-source channel information" at multiple levels: multi-source data, fundamental theory, application methods, and system validation. The work proceeds along three lines: constructing a heterogeneous multi-source behavior-emotion database, studying sensorless multi-granularity motion sensing, and studying multi-dimensional real-time emotion computing driven by multi-granularity motion data. The project built the world's first video-RF multi-channel emotion database, the Vi-CSI-S²AC database; proposed the Vi-CSI-S²AC multi-channel fusion emotion sensing model; proposed sensorless sensing models based on the Rician distribution, Fresnel zone theory, and the CSI ratio (channel state information quotient); and proposed a deep-neural-network method for multi-dimensional emotion computing based on channel visualization. Together these results provide a new way of analyzing mental states from heterogeneous multi-source data. Based on this work, the project published 7 journal papers and 1 conference paper, including 5 first-author papers in JCR Q1 journals and 1 paper at a CCF Rank C conference.
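As an illustration of the CSI-ratio idea mentioned above, the following is a minimal Python sketch; the antenna layout, array shapes, and synthetic data are assumptions made for the example rather than the project's implementation. Dividing the CSI streams of two antennas on the same receiver cancels the random per-packet phase offsets (carrier frequency offset, sampling offset, packet-detection delay) that both antennas share, so the phase of the ratio tracks changes in the reflected path length and can serve as a sensorless motion indicator.

```python
# Minimal sketch of the CSI-ratio idea (shapes and synthetic data are
# illustrative assumptions, not the project's implementation).
import numpy as np

def csi_ratio(csi_ant_a: np.ndarray, csi_ant_b: np.ndarray,
              eps: float = 1e-9) -> np.ndarray:
    """Element-wise ratio of two antennas' complex CSI streams
    (shape: packets x subcarriers); shared phase offsets cancel out."""
    return csi_ant_a / (csi_ant_b + eps)

def motion_phase(ratio: np.ndarray) -> np.ndarray:
    """Unwrapped phase of the CSI ratio over time for each subcarrier;
    its variation reflects changes in the reflected path length."""
    return np.unwrap(np.angle(ratio), axis=0)

# Synthetic example: 1000 packets x 30 subcarriers with a shared random
# phase offset per packet, plus a slowly rotating dynamic component on
# antenna A only (standing in for a reflection off a moving body).
rng = np.random.default_rng(1)
offsets = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(1000, 1)))
static = 1.0 + 0.1j
dynamic = 0.3 * np.exp(1j * 2 * np.pi * 0.01 * np.arange(1000))[:, None]
ant_a = offsets * (static + dynamic) * np.ones((1, 30))
ant_b = offsets * static * np.ones((1, 30))

phase = motion_phase(csi_ratio(ant_a, ant_b))
print(phase.shape)  # (1000, 30); the shared random offsets have cancelled out
```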