Advances in today's big-data-driven technologies have made artificial intelligence (AI) a first-class productivity tool and platform. Tasks such as machine learning, natural language processing, vision-based action recognition, and biometric recognition are now as common on mobile devices as on full-sized desktop servers. However, today's AI performance is far from perfect: it is still limited by critical issues, e.g., constrained computing resources, impractical model adaptation for real-life applications, and inefficient iteration or updating when algorithms are applied to heterogeneous data. These issues become even more serious when AI technologies are designed for the Internet of Things (IoT). Toward a high-performance AI computing paradigm for diverse IoT scenarios, e.g., fog computing, mobile computing, and mobile-cloud computing, this proposal aims to leverage heterogeneous big-data analysis and resource-defined model adaptation to improve the performance of existing AI technologies in IoT applications, with particular attention to resource constraints and neural-network tailoring. Furthermore, this proposal employs an online-learning-based design to build a stream computing system for heterogeneous data. In contrast to the default paradigm (e.g., batch learning), the proposed paradigm can generate algorithms more flexibly and dynamically to adapt to different IoT resource limits, and thus enables local training that further preserves data privacy. Finally, to evaluate the significance and effectiveness of the proposed system, this proposal also aims to conduct comprehensive evaluations based on real user behavior.
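The contrast drawn above between batch learning and the proposed online-learning design can be illustrated with a minimal sketch: instead of computing gradients over a full stored dataset, the model updates once per arriving sample, so training can run locally on a resource-constrained IoT node as data streams in. The function names, learning rate, and toy data below are illustrative assumptions, not the proposal's actual algorithm.

```python
# Minimal online (streaming) SGD sketch for a linear model y ~ w . x.
# Each sample is consumed once, as it arrives, and then discarded --
# no dataset needs to be held in device memory, unlike batch learning.

def online_sgd(stream, dim, lr=0.05):
    """Fit linear weights one sample at a time."""
    w = [0.0] * dim
    for x, y in stream:
        pred = sum(wi * xi for wi, xi in zip(w, x))
        err = pred - y
        for i in range(dim):
            w[i] -= lr * err * x[i]  # per-sample gradient step
    return w

# Toy stream drawn (noise-free) from y = 2*x0 + 1*x1.
stream = [((x0, x1), 2 * x0 + 1 * x1)
          for x0, x1 in [(1, 0), (0, 1), (1, 1), (2, 1)] * 200]
w = online_sgd(stream, dim=2)
```

After the stream is consumed, `w` converges near the generating weights `[2, 1]`; the same loop would keep adapting if the stream's distribution drifted, which is the flexibility the proposal targets.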
The continued growth of big data has made artificial intelligence the core driving force of a new round of industrial transformation, triggering major changes in economic structure and an overall rise in social productivity. However, current AI computing paradigms still suffer from key problems such as excessive resource overhead, insufficient algorithm accuracy, slow model iteration, and data redundancy, and these problems are especially pronounced in today's IoT big-data application scenarios. Focusing on the computational performance, model accuracy, and data security of AI technologies in IoT scenarios, this project plans to design adaptive model-generation methods based on heterogeneous big-data characteristic analysis and computing-resource awareness, so as to accurately estimate the algorithm model suited to a specific IoT application scenario and thereby mitigate the problems of current AI technologies in practical IoT deployments. To meet the need for rapid model iteration, this project also plans to design a streaming AI computing paradigm that supports asynchronous structural adjustment and training of algorithm models. Finally, this project plans to integrate these staged results into an IoT AI system and prototype whose neural-network structure can be adjusted dynamically to scenario demands, whose resource overhead is self-adaptive, and which protects data privacy, and to analyze and evaluate the prototype's performance.
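The idea of resource-aware adaptive model generation, estimating a model that fits a specific device's budget, can be sketched in miniature: given a parameter budget reported by the target device, pick the largest network configuration that fits. The layer shapes, candidate widths, and budget below are hypothetical, chosen only to make the selection logic concrete.

```python
# Hedged sketch of resource-defined model tailoring: choose the widest
# one-hidden-layer MLP whose parameter count fits a device budget.

def mlp_params(layers):
    """Parameter count of a dense MLP: weights + biases per layer pair."""
    return sum(a * b + b for a, b in zip(layers, layers[1:]))

def tailor(in_dim, out_dim, budget, widths=(512, 256, 128, 64, 32)):
    """Return the largest hidden width that fits the budget, else None."""
    for w in widths:
        if mlp_params([in_dim, w, out_dim]) <= budget:
            return w
    return None

# A hypothetical edge device that can hold at most 20k parameters:
width = tailor(in_dim=64, out_dim=10, budget=20_000)
```

Here `tailor` returns `256`, since a 64-256-10 MLP needs 19,210 parameters while the 512-wide variant exceeds the budget; a real system would search over depth, channel counts, and quantization as well, but the budget-driven selection principle is the same.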
With the rapid growth of big-data-supported industries such as the "metaverse", artificial intelligence, and the Internet of Things, China's information industry faces new challenges and opportunities against the backdrop of a changing world order. Indeed, the economic structure of the global information industry is undergoing major transformation alongside an overall rise in productivity, which places higher demands on high-performance big-data foundation computing technologies for the metaverse, AI, and the IoT. During the project period, this research has achieved substantive results on the key problems of excessive resource overhead, insufficient algorithm accuracy, slow model iteration, and data redundancy in AI computing paradigms. Specifically, based on heterogeneous big-data characteristic analysis and computing-resource awareness, we designed adaptive model-generation methods that accurately estimate the algorithm model suited to a specific IoT application scenario, thereby mitigating the problems of current AI technologies in practical IoT deployments. To meet the need for rapid model iteration, we also designed a streaming AI computing paradigm that supports asynchronous structural adjustment and training of algorithm models. In addition, we implemented a prototype IoT AI system whose neural-network structure can be adjusted dynamically to scenario demands, whose resource overhead is self-adaptive, and which protects data privacy, and analyzed and evaluated its performance. Looking ahead, we plan to continue developing new big-data foundation techniques, further optimizing massive heterogeneous big data through information-entropy computation.
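The planned entropy-based optimization of heterogeneous data can be made concrete with a small sketch: Shannon entropy of a feature column's empirical distribution measures how much information it carries, so near-zero-entropy (redundant) columns are candidates for pruning before training. The example columns below are toy assumptions, not project data.

```python
# Illustrative sketch: Shannon entropy H = -sum(p * log2(p)) as a
# redundancy signal for feature columns in heterogeneous data.
from collections import Counter
from math import log2

def shannon_entropy(values):
    """Entropy (in bits) of the empirical distribution of `values`."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in counts.values())

uniform = shannon_entropy(['a', 'b', 'c', 'd'] * 25)  # informative column
constant = shannon_entropy(['a'] * 100)               # redundant column
```

A uniform four-value column yields 2 bits of entropy, while a constant column yields 0 bits and could be dropped, reducing data volume with no information loss.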