Visual verification is an active research topic in computer vision and artificial intelligence. Existing verification methods have the following limitations in practice: 1) Weak robustness. Models are easily fooled by adversarial examples produced by adding tiny perturbations to the original images. 2) Lack of interpretability. The semantic gap between the model and human cognition makes the predictions unconvincing. To address these issues, this project studies adversarial attack and defense for visual verification, covering the following aspects. 1) Studying adversarial learning mechanisms to build a distance-metric-driven attack model that generates adversarial examples lying in the same metric space as the original image while keeping the perturbation minimal and imperceptible (a minimal sketch of such an attack follows this abstract). 2) Studying automatic network learning methods to design a defense network that adaptively selects between global and local reasoning, and using it to construct a robust verification model whose robustness comes from two sources: data augmentation with adversarial examples and optimization of the network architecture. 3) Applying the above models to large-scale person re-identification to verify their effectiveness and to explore model interpretability and credibility, providing quantitative evaluation of the predictions together with reasonable explanations. The applicant has a solid research foundation in visual verification and deep learning, and the feasibility of the technical route has been fully justified in preliminary work. The research results can be widely applied in many fields related to the national economy and people's livelihood, such as identification and retrieval of suspects, identity verification in security systems, and autonomous driving.
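The following is a minimal PyTorch sketch of the distance-metric-driven attack described in item 1. It is written against an assumed embedding model; the toy encoder, the cosine-similarity loss in embedding space, and the budget constants are illustrative assumptions, not the project's actual implementation.

```python
# Minimal sketch of a metric-driven (embedding-space) adversarial attack, in the
# spirit of the proposal's distance-metric-driven attack model. The encoder,
# epsilon, step size, and iteration count below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def metric_attack(encoder: nn.Module, x: torch.Tensor, x_ref: torch.Tensor,
                  eps: float = 8 / 255, alpha: float = 2 / 255, steps: int = 10):
    """PGD-style attack that pushes the embedding of x away from x_ref while
    keeping the perturbation inside an L-infinity ball of radius eps."""
    x_adv = x.clone().detach()
    with torch.no_grad():
        z_ref = F.normalize(encoder(x_ref), dim=1)   # fixed reference embedding
    for _ in range(steps):
        x_adv.requires_grad_(True)
        z_adv = F.normalize(encoder(x_adv), dim=1)
        # Minimize cosine similarity = maximize the embedding-space distance.
        loss = F.cosine_similarity(z_adv, z_ref, dim=1).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()        # descend the similarity
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project onto the eps-ball
            x_adv = x_adv.clamp(0, 1)                  # keep valid pixel values
    return x_adv.detach()

if __name__ == "__main__":
    # Toy encoder and random "probe"/"gallery" images, only to exercise the code.
    encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))
    x = torch.rand(4, 3, 64, 64)
    x_ref = torch.rand(4, 3, 64, 64)
    x_adv = metric_attack(encoder, x, x_ref)
    print(float((x_adv - x).abs().max()))   # perturbation stays within eps
```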
To address the weak robustness and limited interpretability of existing visual verification methods described above, the project carried out research on adversarial attack and defense for visual verification, including the following. 1) Studying adversarial learning mechanisms and building a distance-metric-driven attack model that, with as little perturbation as possible, generates adversarial examples that remain in the same metric space as the original image and are hard to distinguish from it. 2) Proposing a new query-based black-box universal adversarial perturbation (UAP) attack algorithm, which estimates gradients with coordinate-wise gradient estimation combined with importance sampling and, to accelerate convergence, updates the UAP with a coordinate MI-FGSM method carrying a spatial momentum prior, reaching performance close to white-box attacks under a limited average number of queries per image (a sketch of this kind of black-box UAP attack follows this paragraph). 3) Proposing an adversarial example detection method based on perturbation information and an adversarial example generation method based on iterative gradients; suspected adversarial examples are purified by removing the perturbation information, improving the defense capability while preserving model accuracy and yielding a robust verification model. 4) Validating the effectiveness of the algorithms on large-scale person re-identification, and exploring model interpretability and credibility measures that give quantitative evaluation and effective explanations of the predictions. The research results have broad application prospects in many fields related to the national economy and people's livelihood, such as identification and retrieval of suspects, identity verification in security systems, and autonomous driving.
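Below is a minimal sketch of a query-based black-box UAP attack in the spirit of item 2: the gradient of an attack loss is estimated by central finite differences on a random subset of the perturbation's coordinates, and the universal perturbation is updated with an MI-FGSM-style momentum sign step projected onto an L-infinity ball. The uniform coordinate sampling (in place of importance sampling), the cross-entropy attack loss, and all constants are simplifying assumptions, not the project's exact algorithm.

```python
# Hedged sketch of a query-based black-box universal adversarial perturbation
# (UAP) attack: coordinate-wise finite-difference gradient estimation plus an
# MI-FGSM-style momentum update. Sampling scheme, loss, and constants are
# illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

@torch.no_grad()
def estimate_grad(model, x, y, delta, n_coords=64, h=1e-2):
    """Central-difference estimate of d(loss)/d(delta) on a random coordinate
    subset, using only forward queries to the model (black-box access)."""
    def attack_loss(d):
        logits = model((x + d).clamp(0, 1))
        return F.cross_entropy(logits, y)            # larger = more misclassification
    grad = torch.zeros_like(delta)
    idx = torch.randperm(delta.numel())[:n_coords]   # importance sampling could bias this choice
    for i in idx:
        e = torch.zeros_like(delta).view(-1)
        e[i] = h
        e = e.view_as(delta)
        grad.view(-1)[i] = (attack_loss(delta + e) - attack_loss(delta - e)) / (2 * h)
    return grad

def black_box_uap(model, loader, eps=8 / 255, alpha=1 / 255, mu=0.9, epochs=2):
    """Build one universal perturbation shared by all images; the momentum term
    g follows MI-FGSM, and delta stays inside the eps L-infinity ball."""
    delta, g = None, None
    for _ in range(epochs):
        for x, y in loader:
            if delta is None:
                delta = torch.zeros_like(x[:1])      # single perturbation for all images
                g = torch.zeros_like(delta)
            grad = estimate_grad(model, x, y, delta)
            g = mu * g + grad / (grad.abs().sum() + 1e-12)   # momentum accumulation
            delta = (delta + alpha * g.sign()).clamp(-eps, eps)
    return delta

if __name__ == "__main__":
    # Toy classifier and random data, only to show that the interfaces fit together.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
    data = [(torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))) for _ in range(4)]
    delta = black_box_uap(model, data)
    print(float(delta.abs().max()))                  # bounded by eps
```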
Research on adversarial example attack and defense methods for autonomous driving
Adversarial attack and defense methods for face anti-spoofing detection
Research on key technologies for entity behavior at the network edge and self-defense against polymorphic attacks
Research on key technologies for defending against low-rate DoS attacks in cloud systems