Road Scene Understanding (RSU) plays a key role in autonomous driving for Intelligent Vehicles (IV). It combines multiple sensors with automatic reasoning to approximate human cognitive abilities and build a synthetic representation of the environment around the vehicle, and it remains an active and challenging research topic worldwide. Current approaches label scene semantics with uniform priority and face a conflict between massive sensor data and limited computational resources. To address these problems, this research proposes a new framework that efficiently fuses a selective Visual Attention Mechanism (VAM) into the RSU solution. A priority task pool is built according to urgency and importance, classifying scene data into dangerous, normal, and auxiliary information; a knowledge database of the corresponding objects is then constructed to give clear tasks to the visual attention model and to guide attention shifts. Combining top-down and bottom-up information, the framework exploits the focusing advantage of VAM and its efficiency in relieving the bottleneck effect of visual computing. Considering the characteristics of traffic video streams, new events are recognized with a task-guided visual attention mechanism, temporal-spatial continuation events are verified with particle-filter probability estimation, and an evidence accumulation strategy issues event alerts and confirmations. By allocating different analysis resources to attention tasks of different priorities, more important information is handled earlier and more promptly, leaving adequate response time for the actuators. This research therefore offers original ideas and a feasible solution for solving the road scene understanding problem robustly, adaptively, and in a timely manner, and its expected results will contribute to task-guided visual attention theory and intelligent vehicle research.
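The priority task pool and evidence accumulation described above can be sketched minimally as follows. This is an illustrative assumption, not the project's actual implementation: the three priority levels, the `Task` structure, and the exponential evidence-accumulation rule with its threshold are all hypothetical choices made for the example.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical priority levels: lower number = more urgent.
DANGEROUS, NORMAL, AUXILIARY = 0, 1, 2

@dataclass(order=True)
class Task:
    priority: int                                  # only field used for ordering
    name: str = field(compare=False)
    evidence: float = field(compare=False, default=0.0)

class PriorityTaskPool:
    """Pool that always yields the most urgent perception task first."""
    def __init__(self):
        self._heap = []
    def push(self, task):
        heapq.heappush(self._heap, task)
    def pop(self):
        return heapq.heappop(self._heap)
    def __len__(self):
        return len(self._heap)

def accumulate_evidence(task, frame_score, alpha=0.7, alert_threshold=0.9):
    """Exponential evidence accumulation over frames (assumed scheme);
    an alert is confirmed once accumulated evidence crosses the threshold."""
    task.evidence = alpha * task.evidence + (1 - alpha) * frame_score
    return task.evidence >= alert_threshold

pool = PriorityTaskPool()
pool.push(Task(AUXILIARY, "lane-marking update"))
pool.push(Task(DANGEROUS, "pedestrian crossing ahead"))
pool.push(Task(NORMAL, "vehicle in adjacent lane"))

order = []
while pool:
    order.append(pool.pop().name)
print(order[0])  # the dangerous task is analysed first
```

Because dangerous tasks pop first, the scarce computation budget is spent on them before normal and auxiliary ones, which is the resource-allocation idea the abstract describes.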
Road scene understanding is a foundation and key technology of intelligent driver-assistance systems and a fundamental topic for autonomous driving of intelligent vehicles. The project studied visual computing on perception data so that the vehicle gains human-like cognition of its environment. Its main work built a perception priority task pool based on different degrees of urgency and importance, and extracted salient targets by combining bottom-up data-driven and top-down task-driven visual attention mechanisms. A perception model was established that simulates human visual attention, attention shifting, and the evidence accumulation process of perceptual recognition. The selective visual attention mechanism was integrated into traffic scene perception, and deep learning was used to explore the information inherent in dense data. Video streams of dynamic scenes were used for unsupervised learning of target features, overcoming the shortcomings of hand-crafted features, and data-driven cues were fused with prior knowledge in a Bayesian framework. An adaptive algorithm was studied that adjusts attention priority according to vehicle driving parameters and environmental parameters; strategies for handling environmental changes, differences in video quality, and small or weak targets were investigated, improving the algorithm's adaptivity, and related optimization algorithms were studied. Main results: four SCI/EI-indexed papers were published; the team took part in a number of domestic and international academic exchanges and conferences; mainstream research directions in related fields were tracked in depth, yielding valuable research data and methods; a fairly complete research system was established for this topic; and a relatively complete academic research team has taken initial shape.
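The combination of bottom-up data-driven and top-down task-driven attention described above can be illustrated with a small sketch. Everything here is an assumption for illustration: the contrast-based conspicuity map stands in for real feature maps, the task prior is hand-made, and the fusion rule with weight `k` is one of several plausible schemes rather than the project's own.

```python
import numpy as np

def bottom_up_saliency(frame):
    """Data-driven conspicuity: local intensity deviation from the
    global mean, a crude stand-in for Itti-style feature maps."""
    return np.abs(frame - frame.mean())

def top_down_weight(task_prior):
    """Task-driven gain map: higher weight where the current priority
    task (pedestrian, obstacle, ...) is expected to appear."""
    return task_prior / (task_prior.max() + 1e-9)

def combined_saliency(frame, task_prior, k=0.5):
    """Fuse the two cues (assumed scheme); k trades off data-driven
    vs task-driven influence. Returns the focus-of-attention location."""
    bu = bottom_up_saliency(frame)
    bu = bu / (bu.max() + 1e-9)
    td = top_down_weight(task_prior)
    s = (1 - k) * bu + k * bu * td   # task prior boosts matching stimuli
    return tuple(int(i) for i in np.unravel_index(np.argmax(s), s.shape))

# Two equally conspicuous stimuli; the task prior expects a target
# in the lower-left region, so attention shifts there.
frame = np.zeros((8, 8)); frame[2, 6] = 1.0; frame[5, 1] = 1.0
prior = np.zeros((8, 8)); prior[5, 1] = 1.0
print(combined_saliency(frame, prior))  # (5, 1)
```

With two equally salient bottom-up stimuli, the top-down prior breaks the tie toward the task-relevant one, which is the attention-shift behaviour the summary describes.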
Data last updated: 2023-05-31
Research on benefit distribution in the farmer-supermarket direct-purchase model
Research on direct brain control of robot direction and speed based on SSVEP
Traffic flow equilibrium assignment model for congested road networks
Task scheduling methods for cloud workflow security
Named entity recognition based on fine-grained word representations
Real-time urban road scene understanding for intelligent driving based on visual computing
Information detection and visual understanding of road vehicles
Road environment understanding ahead of intelligent vehicles based on a quantifiable cortical visual cognition model
Research on object detection methods for intelligent vehicles based on visual attention mechanisms