Emotion decoding technology, which provides an impetus for solving various mental illness problems, is a vital element in advancing emotion evaluation and regulation theory. In recent years, emotion decoding based on electroencephalography (EEG) signals has attracted considerable attention and achieved promising technological results. Most current EEG-based emotion decoding approaches are supervised-learning-based: visual stimuli are used to trigger different emotions, and participants' feedback is collected as ground truth for model training. However, this type of modeling is commonly constrained by the limited number of labeled samples and, more importantly, rarely considers the emotion decoding problem from a systemic point of view. In this project, we aim to develop unsupervised-learning-based emotion decoding methods using EEG and visual information, and to study human emotion systematically on the basis of the emotion reaction mechanism (from emotion occurrence to brain performance) and the triggering mechanism (from visual stimulation to emotion occurrence). First, we will estimate emotion-related dynamic spatial-temporal EEG features, describe the hidden, complex, and non-pairwise relationships among trials and participants underlying various emotions, and propose a hypergraph-based EEG decoding system for human emotion recognition. Second, we will detect deep visual representations, develop a probabilistic hypergraph matching algorithm, examine the possible triggering effects of these deep visual representations, and build a visual-information-based emotion recognition system using fuzzy clustering techniques. Third, we will develop a "stimulation-occurrence-performance" (SOP) based brain-computer interface (BCI) system with quantitative and qualitative validations, providing a real-time, self-adaptive, and dynamically updated intelligent platform for emotion regulation.
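In the EEG emotion literature, dynamic spatial-temporal features of the kind mentioned above are often computed as band-wise differential entropy (DE) over the conventional EEG rhythms. The sketch below is a minimal illustration of that idea, not the project's actual feature pipeline; the band definitions and the ideal FFT band-pass filter are simplifying assumptions, and DE = 0.5·log(2πe·var) holds only under the assumption that each band-passed signal is approximately Gaussian.

```python
import numpy as np

def differential_entropy(eeg, fs, bands=None):
    """Band-wise differential entropy per channel.

    For a Gaussian signal, DE = 0.5 * log(2*pi*e * variance); each
    band-passed signal is assumed approximately Gaussian (an assumption,
    common in EEG emotion studies but not guaranteed).
    """
    if bands is None:  # conventional EEG rhythm bands in Hz (assumed here)
        bands = {"theta": (4, 8), "alpha": (8, 13),
                 "beta": (13, 30), "gamma": (30, 45)}
    n = eeg.shape[-1]
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec = np.fft.rfft(eeg, axis=-1)
    out = {}
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)            # ideal band-pass
        band = np.fft.irfft(spec * mask, n=n, axis=-1) # band-limited signal
        out[name] = 0.5 * np.log(2 * np.pi * np.e * band.var(axis=-1))
    return out
```

On a synthetic 10 Hz oscillation, for example, the alpha-band DE dominates the other bands, which is the kind of band-specific dynamic this feature is meant to capture.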
This project is expected to explore the relationship between the emotion reaction mechanism and the triggering mechanism, and to further advance unsupervised-learning-based emotion decoding techniques in real-world applications.
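The hypergraph-based decoding idea in the first aim can be illustrated with spectral partitioning under the normalized hypergraph Laplacian of Zhou et al. (2006). The sketch below is an assumption-laden toy, not the project's algorithm: the incidence matrix, the unit hyperedge weights, and the final k-means step are all placeholders for structures the project would learn from EEG features.

```python
import numpy as np

def kmeans(X, k, seed=0, iters=50):
    """Plain k-means, used here only to cluster the spectral embedding."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def hypergraph_partition(H, w, k, seed=0):
    """Spectral partition via the normalized hypergraph Laplacian
    L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}  (Zhou et al., 2006).

    H : (n_vertices, n_edges) 0/1 incidence matrix (vertices = EEG trials,
        hyperedges = groups of trials sharing some non-pairwise relation)
    w : (n_edges,) hyperedge weights
    k : number of clusters (e.g. emotion categories)
    """
    Dv = (H * w).sum(axis=1)                   # weighted vertex degrees
    De = H.sum(axis=0)                         # hyperedge degrees
    Dv_is = np.diag(1.0 / np.sqrt(Dv))
    Theta = Dv_is @ H @ np.diag(w / De) @ H.T @ Dv_is
    L = np.eye(len(H)) - Theta                 # normalized Laplacian
    _, vecs = np.linalg.eigh(L)                # eigenvalues in ascending order
    return kmeans(vecs[:, :k], k, seed)        # embed vertices, then cluster
```

On a toy hypergraph with two disconnected trial groups this recovers the groups as clusters; the real decoding problem additionally requires constructing the hyperedges and weights from the spatial-temporal EEG features.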
Research background: EEG-based emotion decoding offers a more objective way to analyze, understand, and decode emotion. However, existing EEG emotion decoding techniques still face notable limitations in generality and applicability: (1) building EEG emotion decoding models incurs high labeling costs; (2) most studies take a single perspective and lack a systematic investigation of both the triggering mechanism and the reaction mechanism of emotion; (3) affective BCI technology remains some distance from practical engineering use and lacks application and validation in real-world scenarios. This project therefore proposes unsupervised emotion decoding that combines EEG signals with video content, aiming to overcome these limitations and advance the development and practical application of affective BCI technology.

Research content: This project designs video-evoked EEG emotion experiments. By revealing the dynamic time-frequency-spatial characteristics of EEG under different emotional states and the triggering effects of deep video features, it explores the representational associations among emotional state, brain neural activity, and the evoking video content, enabling a deeper and more systematic study of the reaction and triggering mechanisms of emotion. These findings in turn guide the modeling and algorithm development for unsupervised emotion decoding, which is applied and validated in a BCI platform for emotion regulation.

Key results: Centered on unsupervised emotion decoding and its BCI applications, the project produced four main results. (1) It characterized how subjectively experienced emotional states and objectively presented stimulus effects shape the brain's emotion-processing patterns, extending the neurophysiological understanding of dynamic emotion analysis. (2) It proposed an unsupervised EEG emotion decoding framework that supports the representation of spatial-temporal dynamics in non-stationary EEG time series and the completion of emotion decoding tasks. (3) It developed multimodal emotion decoding that extracts individual emotional preferences from EEG signals and intrinsic affective information from video content, achieving complementary information across modalities in the decoding task. (4) It prototyped an intelligent emotion-regulation BCI platform based on video stimuli, demonstrating the feasibility of adaptive emotion regulation under a naturalistic paradigm.

Scientific significance: Centered on the "stimulation-occurrence-performance" view of emotion, this project exploits the reaction mechanism of emotion in EEG signals and the triggering mechanism of video stimuli to develop novel unsupervised emotion decoding techniques, promoting the development and application of intelligent emotion-regulation BCI platforms and providing theoretical support and technological innovation for affective BCI technology.
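The fuzzy clustering step used for visual-information-based decoding can be illustrated with standard fuzzy c-means (Bezdek), which assigns each sample a soft membership in every cluster rather than a hard label. This is a generic sketch under stated assumptions: the input vectors stand in for the deep visual features, and the fuzzifier m = 2 and the random initialization are conventional defaults, not the project's settings.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Fuzzy c-means: returns soft memberships U (rows sum to 1) and centers.

    X : (n_samples, n_features) feature vectors (here, stand-ins for
        deep visual representations)
    c : number of clusters; m : fuzzifier (> 1, larger = softer)
    """
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                 # random soft init
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # membership-weighted means
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-9
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)        # membership update
    return U, centers
```

The soft memberships are what make this attractive for emotion data, where a video clip can plausibly evoke a blend of emotions; hardening the output is just an argmax over each membership row.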
Data last updated: 2023-05-31