Privacy-preserving outsourced machine learning is indispensable in multi-user environments. Unlike existing work that studies big-data privacy in isolation, this project combines machine learning, computation outsourcing, and cryptographic techniques to systematically investigate the theory and methods of privacy preservation for multi-party outsourced machine learning from the perspective of data contributed by multiple users, and thereby aims to stimulate new developments in this interdisciplinary area. The project pursues the following goals. (1) Establish a provable privacy-preservation model for multi-party outsourced machine learning by formally defining the adversary's goals and capabilities; derive a formula for quantifying the amount of privacy-preserving information provided by random perturbation; propose a fast method for computing the Pufferfish privacy metric on correlated data; and identify the best way to design practice-oriented, provably privacy-preserving outsourced machine learning schemes in the multi-user setting. (2) Design multi-party privacy-preserving schemes for outsourcing traditional machine learning based on random perturbation, secure multi-party computation (MPC), and encryption, respectively; develop the basic techniques for proving privacy preservation; and give formal proofs of all these schemes in the provable privacy-preservation model from the viewpoint of computational complexity. (3) Combine differential privacy, MPC, and homomorphic encryption to construct a generic framework for multi-party privacy-preserving outsourced deep learning, and give rigorous, tight proofs of these schemes in the same model. The results of this project are also expected to advance provable-security cryptography and machine learning in their own right.
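For context, the random-perturbation approach referred to above is usually formalized via differential privacy, which the Pufferfish framework mentioned in aim (1) generalizes by conditioning on a class of data distributions. The following are the standard definitions (included only as background, not the project's own formulas). A randomized mechanism \mathcal{M} is \varepsilon-differentially private if, for all neighboring datasets D, D' differing in one user's record and all measurable output sets S,

\Pr[\mathcal{M}(D)\in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D')\in S].

For a query f with L1-sensitivity \Delta f = \max_{D\sim D'}\lVert f(D)-f(D')\rVert_1, the Laplace mechanism \mathcal{M}(D) = f(D) + \mathrm{Lap}(\Delta f/\varepsilon) satisfies \varepsilon-differential privacy.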
Combining machine learning, computation outsourcing, and cryptographic techniques, this project investigated the theory and methods of privacy preservation for multi-party outsourced machine learning. The main results are as follows. (1) We established a privacy-preservation model for multi-party outsourced machine learning, gave two definitions of privacy preservation for multi-party outsourced computation, and formally characterized the adversary's goals and capabilities in multi-party outsourced machine learning. (2) We extended privacy-preserving single-party outsourced computation to the multi-party setting and, based on differential privacy, MPC, and cryptographic techniques respectively, designed a multi-party privacy-preserving outsourced gradient descent scheme (OPPGD), a multi-party privacy-preserving outsourced matrix multiplication scheme (OPPMM), a multi-party privacy-preserving outsourced matrix factorization scheme (OPPMF), and a privacy-preserving secure aggregation scheme for medical data (SecMedAgg). Following the privacy-preservation model, we gave formal proofs of the privacy of these schemes from the viewpoint of computational complexity and analyzed their computation and communication complexity. (3) Combining differential privacy, MPC, and homomorphic encryption, we proposed design methods for multi-party privacy-preserving outsourced deep learning schemes, identified a mechanism for quantifying the privacy leakage of differential-privacy-based multi-party outsourced machine learning schemes, and studied basic methods for rigorous, tight proofs of privacy preservation for such schemes. The results of this project are expected to advance provable-security cryptography and machine learning in their own right.
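To illustrate the general idea behind privacy-preserving outsourced matrix multiplication (this is a minimal single-client sketch of the well-known random-masking approach, not the project's OPPMM protocol, whose details are not given here; all names and parameters are illustrative), a client can hide its inputs from an untrusted server with random invertible masks and strip them from the returned product:

import numpy as np

def random_invertible(n, rng):
    # Sample a random matrix and reject near-singular draws.
    while True:
        M = rng.standard_normal((n, n))
        if np.linalg.cond(M) < 1e6:
            return M

def client_mask(A, B, rng):
    # Client side: hide A (m x n) and B (n x p) behind random masks.
    m, n = A.shape
    p = B.shape[1]
    P = random_invertible(m, rng)
    Q = random_invertible(n, rng)
    R = random_invertible(p, rng)
    A_masked = P @ A @ Q
    B_masked = np.linalg.inv(Q) @ B @ R
    return A_masked, B_masked, (P, R)

def server_multiply(A_masked, B_masked):
    # Untrusted server: does the heavy multiplication on masked data only.
    return A_masked @ B_masked          # equals P @ (A @ B) @ R

def client_unmask(C_masked, keys):
    # Client side: strip the masks to recover A @ B.
    P, R = keys
    return np.linalg.inv(P) @ C_masked @ np.linalg.inv(R)

# Usage: the client learns A @ B; the server only ever sees masked matrices.
rng = np.random.default_rng(0)
A, B = rng.standard_normal((4, 3)), rng.standard_normal((3, 5))
A_m, B_m, keys = client_mask(A, B, rng)
C = client_unmask(server_multiply(A_m, B_m), keys)
assert np.allclose(C, A @ B)

In the multi-party setting addressed by this project, the masks must additionally be coordinated among the data owners (for example via MPC or shared keys), which this single-client sketch does not capture.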