To achieve more robust personal identification in unconstrained environments and to overcome the shortcomings of uni-modal recognition based on the face or ear alone, this project proposes a scheme that first converts the acquired face and ear data, via a spherical transform, into an object-centered representation. By mapping the converted data onto a sphere surface, Multimodal fAce and eaR Spherical depth and texture maps (MARS maps) are obtained. In the MARS depth map and MARS texture map, the two modalities, face and ear, are naturally fused, and more complete structural and textural information becomes available, which helps alleviate the problems caused by pose variation, facial expression, aging, and occlusion when the face or ear is used alone for recognition. These spherical maps are, in addition, invariant to out-of-plane rotation, which facilitates subsequent alignment-free identification that is robust to pose variation. Meanwhile, representing the maps in 2D form reduces both the storage overhead and the computational load of the recognition process. Personal identification in unconstrained environments is essentially identification from partial data; this project therefore explores face and ear recognition methods based on local features extracted from MARS maps. The scope of the research covers the conversion of structural and textural face and ear data; the construction of fast, highly discriminative, and robust local feature descriptors; and partial face and ear recognition strategies that fuse structural and textural information. The output of the project is expected to contribute not only to face- and ear-based multimodal personal identification in unconstrained environments, but also to theoretical and practical research in related areas.
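As a rough illustration of the spherical-transform idea, the sketch below rasterizes an object-centered 3D point cloud into a 2D spherical depth map. This is only a minimal sketch under assumed conventions (mean-centering, a fixed angular grid, keeping the outermost radius per cell); it is not the project's actual MARS-map construction.

```python
import numpy as np

def mars_depth_map(points, n_theta=64, n_phi=64):
    """Rasterize an object-centered 3D point cloud into a spherical depth map:
    each (theta, phi) cell stores the radius r of the surface point seen in
    that angular direction. Grid size and centering are illustrative choices."""
    centered = points - points.mean(axis=0)            # object-centered frame
    x, y, z = centered.T
    r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    # Spherical angles: polar theta in [0, pi], azimuth phi shifted to [0, 2*pi)
    theta = np.arccos(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    phi = np.arctan2(y, x) + np.pi
    ti = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
    pj = np.minimum((phi / (2 * np.pi) * n_phi).astype(int), n_phi - 1)
    depth = np.full((n_theta, n_phi), np.nan)
    for i, j, rad in zip(ti, pj, r):
        # Keep the outermost point per cell (assumed visibility rule)
        if np.isnan(depth[i, j]) or rad > depth[i, j]:
            depth[i, j] = rad
    return depth
```

A texture map could be built the same way by storing each point's color instead of its radius; because the representation is anchored to the object center, rotating the whole head only shifts the map along the azimuth axis rather than distorting it.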
In addition, the project focused on deep-learning-based ear feature extraction, ear landmark detection, and ear recognition methods, and studied decision-level and feature-level fusion of face and ear information, substantially improving ear recognition accuracy under pose variation and occlusion in unconstrained scenarios. Over the course of the project, 23 academic papers were published, comprising 13 journal papers and 10 conference papers, all indexed by SCI, EI, or other major indexing systems. The project also trained 7 doctoral students and 11 master's students, of whom 1 doctoral student is still enrolled.
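The decision-level fusion mentioned above can be sketched as a weighted combination of per-modality match scores with a fallback when one modality is occluded. The weights and the visibility flags here are illustrative assumptions, not the project's reported method.

```python
def fuse_scores(face_score, ear_score, w_face=0.6, w_ear=0.4,
                face_visible=True, ear_visible=True):
    """Decision-level fusion sketch: weighted sum of face and ear match
    scores, falling back to the visible modality under occlusion.
    Weights are hypothetical and would normally be tuned on validation data."""
    if face_visible and ear_visible:
        return w_face * face_score + w_ear * ear_score
    if face_visible:
        return face_score
    if ear_visible:
        return ear_score
    return 0.0  # neither modality available
```

Feature-level fusion would instead concatenate or jointly embed the face and ear feature vectors before matching, trading the simplicity of score fusion for a richer joint representation.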
Data last updated: 2023-05-31
Multimodal biometric recognition technology based on ear and face information fusion
Research on face and ear fusion recognition under partial occlusion
Research on unconstrained face recognition based on deep feature learning
Research on learning emotion recognition based on multimodal information fusion in classroom environments