This project focuses on the diversity and complexity of deep neural network structures in the field of deep learning. It aims to propose structural optimization algorithms for deep CNN design that guide the rapid construction of efficient, lightweight deep networks for both training and inference. Most current deep CNNs are designed around a single objective such as performance, accuracy, efficiency, or speed. To address this, we propose an optimization methodology that effectively combines advanced network compression and fusion algorithms, compressing the initial CNN as far as possible while preserving performance and fusing features without adding further redundancy or overhead. Building on our prior research, we propose a compression algorithm that combines coarse-grained pruning of convolutional layers with fine-grained pruning of fully-connected layers; for network fusion, we propose channel-correlation computation and channel grouping over the convolutional layers; and we further propose a joint network optimization mechanism based on fusion and pruning. Finally, the project is also dedicated to uncovering the scientific theory behind deep neural networks, proposing mathematical models and representation functions that reasonably characterize CNN information and its propagation. The project not only helps solve some of the scientific challenges in network optimization for CNN design, but also provides guidance for CNN design targeted at different datasets and applications.
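To make the compression component concrete, the following minimal sketch (hypothetical PyTorch code; the toy model, layer indices, pruning criteria, and keep ratios are illustrative assumptions rather than the project's actual implementation) combines coarse-grained channel pruning of a convolutional layer with fine-grained magnitude pruning of a fully-connected layer.

```python
# Illustrative sketch (assumptions, not the project's code): coarse-grained
# channel pruning for a conv layer plus fine-grained weight pruning for an FC layer.
import torch
import torch.nn as nn


def coarse_prune_conv(conv: nn.Conv2d, keep_ratio: float) -> torch.Tensor:
    """Return a boolean mask over output channels, keeping the filters with the
    largest L1 norms (whole-channel, i.e. coarse-grained, pruning)."""
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))   # one L1 score per output channel
    n_keep = max(1, int(keep_ratio * scores.numel()))
    keep = torch.zeros_like(scores, dtype=torch.bool)
    keep[scores.topk(n_keep).indices] = True
    return keep


def fine_prune_fc(fc: nn.Linear, keep_ratio: float) -> None:
    """Zero out the individual weights with the smallest magnitudes
    (element-wise, i.e. fine-grained, pruning), in place."""
    flat = fc.weight.detach().abs().flatten()
    k = int((1.0 - keep_ratio) * flat.numel()) + 1            # rank of the cut-off magnitude
    threshold = flat.kthvalue(k).values
    mask = fc.weight.detach().abs() >= threshold
    with torch.no_grad():
        fc.weight.mul_(mask)


if __name__ == "__main__":
    # Toy CNN standing in for the initial, over-parameterized network.
    model = nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 10),
    )
    conv_mask = coarse_prune_conv(model[0], keep_ratio=0.5)   # keep 50% of the channels
    fine_prune_fc(model[4], keep_ratio=0.2)                   # keep 20% of the FC weights
    print(f"conv channels kept: {int(conv_mask.sum())}/{conv_mask.numel()}")
```

In a complete pipeline, the channel mask would be used to physically remove the selected filters (and the matching input channels of the following layer), and the pruned network would then be fine-tuned to recover accuracy.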
This project addresses the diversity and complexity of deep neural network structures in the current field of deep learning, and aims to propose structural optimization algorithms for deep neural network design that guide how to construct lightweight deep networks quickly and efficiently during training and inference. To tackle the problem that current CNN designs emphasize only a single factor such as performance, accuracy, efficiency, or speed, the project proposes to effectively combine advanced network compression and fusion algorithms, so that the network is compressed as much as possible while performance is preserved, and features are fused as much as possible without adding redundancy or overhead. Building on our prior research, for network compression we propose an optimization algorithm that combines coarse-grained pruning of convolutional layers with fine-grained pruning of fully-connected layers; for network fusion, we propose an optimization algorithm based on channel-correlation computation and channel grouping in the convolutional layers; finally, we propose a joint network optimization mechanism based on fusion and pruning. At the same time, the project is dedicated to uncovering the scientific theory behind deep neural networks, proposing mathematical models and representation functions that reasonably characterize CNN information and its propagation. Carrying out this project not only solves some of the scientific challenges faced by network optimization in CNN design, but also provides guidance for CNN design aimed at different datasets and target applications.
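To illustrate the fusion side, the sketch below shows one possible way to compute correlations between the channels of a convolutional layer's output feature maps and to group strongly correlated channels; the Pearson-correlation measure, the greedy grouping rule, and the 0.8 threshold are assumptions made for illustration only.

```python
# Illustrative sketch (assumptions, not the project's code): channel-correlation
# computation and greedy channel grouping over a conv layer's output feature maps.
import torch
import torch.nn as nn


def channel_correlation(features: torch.Tensor) -> torch.Tensor:
    """features: (N, C, H, W) activations. Returns a (C, C) matrix of Pearson
    correlations between channels, pooled over the batch and spatial dimensions."""
    feats = features.detach()
    c = feats.shape[1]
    x = feats.permute(1, 0, 2, 3).reshape(c, -1)               # one row per channel
    x = x - x.mean(dim=1, keepdim=True)
    x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
    return x @ x.t()                                           # cosine of centered rows = Pearson r


def group_channels(corr: torch.Tensor, threshold: float = 0.8) -> list[list[int]]:
    """Greedy grouping: each ungrouped channel seeds a group and absorbs every
    remaining channel whose correlation with the seed exceeds the threshold."""
    unassigned = set(range(corr.shape[0]))
    groups = []
    while unassigned:
        seed = min(unassigned)
        group = [i for i in sorted(unassigned) if i == seed or corr[seed, i] > threshold]
        groups.append(group)
        unassigned -= set(group)
    return groups


if __name__ == "__main__":
    conv = nn.Conv2d(3, 16, 3, padding=1)
    feats = conv(torch.randn(8, 3, 32, 32))                    # toy batch of activations
    groups = group_channels(channel_correlation(feats), threshold=0.8)
    print(f"{len(groups)} channel group(s):", groups)
```

Each resulting group is a candidate for fusion, for example by merging its channels into a single representative one, after which the joint fusion-and-pruning mechanism described above would compress the fused network further.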
This project has remained focused on the diversity and complexity of deep neural network structures in the current field of deep learning, aiming to propose structural optimization algorithms for deep neural network design that guide the rapid and efficient construction of lightweight deep networks during training and inference. By studying how multiple feature extraction methods can be combined with compressed and optimized networks, and by applying multi-granularity feature extraction, deep feature estimation, and subtle-change detection and recognition, the project achieved continuous improvements on mainstream target applications. We proposed feature extraction and network optimization methods suitable for TinyML and successfully applied them to several typical applications; building on these results, we have also started follow-up research on implementing a reconfigurable MCU with independent intellectual property rights based on the TinyML outcomes. Supported by this project, 18 papers were published in important domestic and international journals and conferences, including 5 EI conference papers and 13 SCI journal papers; 12 invention patents were filed, of which 2 have been granted. The project trained 18 graduate students, including 4 doctoral students; 14 have obtained master's degrees and the remaining 4 are still enrolled. In addition, the project produced one technology-transfer outcome.
Data last updated: 2023-05-31
On the Impact of the Big Data Environment on the Development of Information Science
High-Resolution 3D Imaging for Wideband MIMO Radar Based on Kronecker Compressed Sensing
Small-UAV Remote Sensing Image Registration with Inlier Maximization and Redundant Point Control
Research on Crime Prediction Algorithms Based on Multimodal Information Feature Fusion
Research on Named Entity Recognition Based on Fine-Grained Word Representations
Research on Compressed Sensing Imaging Technology Based on Deep Learning
Research on Key Technologies of Bionic Vision Chips Combining Single-Photon Imaging and Deep Learning
Research on Key Technologies for Indoor Localization in Complex Environments Based on Automatically Learned Deep Belief Networks and Multi-Source Information Fusion
Research on Dedicated Deep Neural Network Processing Chips Based on Compressive Learning