Benefiting from advances in deep-learning-based image recognition and language modeling, it has become possible to automatically describe an image by generating a sentence. This problem, known as image captioning, is of great importance to the goal of enabling computers to understand images. Besides recognizing the objects in an image, the generator must also analyze their states, understand the relationships among them, and express this information in natural language. Existing work has mainly focused on describing the factual aspects of images, including objects, movements, and their relations; the stylized, non-factual aspects of a written description are missing from current systems. One such style is description with emotion, which is commonplace in everyday communication and influences decision-making and interpersonal relationships. Affective Image Captioning (AIC), also known as Image Captioning with Emotion (ICwE), is a newly emerging research area that aims to automatically generate a sentence description with a specified emotion for an image. It involves many fundamental theories and practical techniques, which makes its study significant in theory and useful in application. In this project, we will conduct an in-depth study of AIC. Our main research content includes: construction of an image caption dataset with emotions, emotion distribution prediction, and reference-based long short-term memory and attention mechanisms for AIC. With the key techniques and algorithms we develop, we will implement an emotion-aware, semantics-preserving image captioning system. We aim to make theoretical achievements, develop several novel techniques, and lay a solid foundation in both theory and technique for AIC.
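To make the proposed pipeline concrete, here is a minimal, hypothetical sketch of one attention step in an emotion-aware caption decoder: the decoder's hidden state attends over image region features, and the attended visual context is combined with an embedding of the target emotion before the next word is predicted. All names, dimensions, and the weighting scheme are illustrative assumptions, not the project's actual architecture.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attend(hidden, regions):
    """Soft attention: score each region feature against the decoder
    hidden state, normalize with softmax, and return the weighted
    context vector together with the attention weights."""
    weights = softmax([dot(hidden, r) for r in regions])
    dim = len(regions[0])
    context = [sum(w * r[d] for w, r in zip(weights, regions))
               for d in range(dim)]
    return context, weights

def decoder_input(context, emotion_embedding):
    """Concatenate the attended visual context with the emotion
    embedding; the result would feed the next LSTM step."""
    return context + emotion_embedding

if __name__ == "__main__":
    hidden = [0.5, -0.2, 0.1]                      # decoder state (toy)
    regions = [[1.0, 0.0, 0.0],                    # three region features
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]]
    emotion = [0.9, 0.1]                           # e.g. embedding for "joy"
    ctx, w = attend(hidden, regions)
    x = decoder_input(ctx, emotion)
    print(len(x), round(sum(w), 6))
```

Conditioning the decoder input on an emotion embedding, rather than a single model per emotion, is one common design choice for generating the same image's caption under different specified emotions.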
Efficiently and accurately generating semantically rich descriptions for Internet images can provide technical support for major national needs such as public-opinion analysis on social media and effective regulation of Internet content. This project carried out research and development on emotion-enhanced image captioning and achieved a series of original results on visual sentiment analysis, image captioning, cross-modal retrieval and matching, and unsupervised domain adaptation. To address challenges including the subjectivity and uncertainty of emotion perception, exposure bias, missing cross-modal correlations, and domain shift, the project team proposed methods including: discrete emotion distribution prediction based on multi-feature shared sparse learning; personalized emotion recognition based on physiological signals; fine-grained visual emotion regression with polarity-consistent deep attention networks; zero-shot emotion recognition based on affective structural embedding; image captioning with reference-based long short-term memory networks; image captioning based on temporal-difference learning; attribute-driven attention; cross-modal image-text retrieval based on semantic consistency; emotion-based end-to-end matching between images and music in the valence-arousal space; image emotion domain adaptation with emotion-semantic-consistent cycle-consistent generative adversarial networks; multi-source domain adaptation for semantic segmentation with multi-source adversarial domain aggregation networks; and multi-source distilling domain adaptation. The team completed all planned research work, published or had accepted 12 SCI-indexed international journal papers (including 7 in IEEE/ACM Transactions) and 19 EI-indexed papers at CCF-recommended Class A conferences, filed 1 US invention patent application, developed several prototype systems, and conducted demonstration applications at DiDi Chuxing and other enterprises and institutions, yielding social and economic benefits.
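The "emotion distribution prediction" task above treats an image's emotion not as a single label but as a distribution, since different viewers perceive different emotions. A minimal sketch of how such a target distribution can be formed from annotations: several annotators label the same image, and the normalized vote histogram (optionally with add-alpha smoothing) becomes the discrete distribution the model regresses. The category set and smoothing constant here are illustrative assumptions, not the project's actual protocol.

```python
from collections import Counter

# Hypothetical eight-category emotion set (a common choice in visual
# emotion analysis); the project's actual categories may differ.
EMOTIONS = ["amusement", "awe", "contentment", "excitement",
            "anger", "disgust", "fear", "sadness"]

def emotion_distribution(votes, alpha=0.0):
    """Turn per-annotator emotion labels into a probability distribution
    over the fixed category set, with optional add-alpha smoothing so
    unobserved emotions keep a small nonzero probability."""
    counts = Counter(votes)
    total = len(votes) + alpha * len(EMOTIONS)
    return {e: (counts.get(e, 0) + alpha) / total for e in EMOTIONS}

if __name__ == "__main__":
    votes = ["awe", "awe", "contentment", "excitement", "awe"]
    dist = emotion_distribution(votes, alpha=0.1)
    print(max(dist, key=dist.get))  # "awe" dominates the distribution
```

A model trained against such targets can then be evaluated with distribution distances (e.g. KL divergence) rather than classification accuracy alone, which better reflects the subjectivity of emotion perception.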
Data last updated: 2023-05-31