Recently, neural networks have found wide application in many scientific and engineering fields, and the learning method is essential to a neural network's performance. This project studies the most commonly used online learning method for feedforward neural networks. In contrast to the usual probabilistic and non-monotonic convergence results, we prove a series of deterministic and monotonic convergence results: the error function decreases monotonically after every training example has been used exactly once. These findings should help to clarify the essence of online learning methods and to make the most of them. On the application side, we use a BP neural network to predict the ups and downs of the Shanghai stock market, reaching an accuracy of up to 75%. We also combine the BP neural network with a feature-mapping neural network to recognize printed mathematical formulas; more than 90% of the training examples over 124 mathematical symbols are recognized correctly.
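The monotonic-decrease claim refers to cyclic online (example-by-example) gradient training. As a rough illustration only, the sketch below trains a small single-hidden-layer sigmoid network with online backpropagation on a toy XOR task (the data, network size and learning rate are my own choices, not taken from the project) and records the total squared error after each full pass, so the per-pass behaviour can be checked empirically.

```python
# Minimal sketch (not the project's construction): cyclic online gradient
# descent for a single-hidden-layer sigmoid network on a toy XOR task.
# The total squared error is recorded after each full pass over the
# training set to observe whether it decreases pass by pass.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data (illustrative only): four XOR patterns.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([0., 1., 1., 0.])

n_in, n_hidden = 2, 4
W1 = 0.5 * rng.standard_normal((n_hidden, n_in))   # hidden-layer weights
b1 = np.zeros(n_hidden)
w2 = 0.5 * rng.standard_normal(n_hidden)           # output weights
b2 = 0.0
eta = 0.1                                          # small fixed learning rate

def forward(x):
    h = sigmoid(W1 @ x + b1)
    y = sigmoid(w2 @ h + b2)
    return h, y

def total_error():
    return 0.5 * sum((forward(x)[1] - t) ** 2 for x, t in zip(X, T))

errors = [total_error()]
for epoch in range(200):
    for x, t in zip(X, T):              # online: update after every example
        h, y = forward(x)
        delta_out = (y - t) * y * (1 - y)
        delta_hid = delta_out * w2 * h * (1 - h)
        w2 -= eta * delta_out * h
        b2 -= eta * delta_out
        W1 -= eta * np.outer(delta_hid, x)
        b1 -= eta * delta_hid
    errors.append(total_error())        # error after one full pass

# With a sufficiently small learning rate, the deterministic results described
# above suggest the per-pass error should be non-increasing; this only checks
# that empirically for this toy run.
print("initial error: %.4f  final error: %.4f" % (errors[0], errors[-1]))
print("non-increasing per pass:",
      all(b <= a + 1e-12 for a, b in zip(errors, errors[1:])))
```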
Online learning algorithms are widely used to optimize the weights of neural networks, but their convergence has so far not been proved. The main difficulty is that the "square of the sum" cannot be used to dominate the "sum of the squares". We establish a probabilistic inequality showing that, in a certain probabilistic sense, the square of the sum is no smaller than a constant times the sum of the squares. This resolves the key difficulty, and we successfully prove convergence for a typical model. We expect this approach to be applicable to the convergence analysis of a whole class of online learning algorithms.
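Schematically, the inequality at the heart of the argument can be written as below; the terms a_j and the constant C are generic placeholders of mine rather than the notation of the original work. Deterministically such a lower bound can fail because of cancellation among terms of varying sign, which is why a probabilistic formulation is needed.

\[
\Bigl(\sum_{j=1}^{J} a_j\Bigr)^{2} \;\ge\; C \sum_{j=1}^{J} a_j^{2}
\qquad \text{in a suitable probabilistic sense, for some constant } C > 0.
\]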
Deterministic convergence of online learning algorithms for BP neural networks
Research on the convergence of neural network learning algorithms
Convergence and robustness of subspace learning algorithms for neural networks
Design and deterministic convergence analysis of fault-tolerant learning algorithms for feedforward neural networks