Neural Networks and Fuzzy Systems — Lecture Slides

Neural Networks and Fuzzy Systems
Student:        Advisor:
CHAPTER 6  ARCHITECTURE AND EQUILIBRIA

Preface: Lyapunov functions and system stability
For a given system, construct a Lyapunov function L. If dL/dt <= 0, the system is stable; if dL/dt < 0, the system is asymptotically stable. Constructing such an L proves the system stable, and converse results relate the stability of the system back to the existence of an L function.
Historical note: Lyapunov, Chebyshev, Markov, and the St. Petersburg school of mathematics.

6.1 Neural Networks as Stochastic Gradient Systems
Networks are classified along two axes:
1. Synaptic connection topology: feedforward or feedback.
2. How learning modifies the connection topology: supervised or unsupervised.
Supervised learning: (a) training: labeled training data -> features -> model; (b) testing: test data -> features -> model -> predicted label.
Unsupervised learning: (a) training: unlabeled training data -> features -> model; (b) testing: test data -> features -> model -> result. K-means is a standard example.

Neural network taxonomy (decoding: feedforward vs. feedback; encoding: supervised vs. unsupervised):
- Feedforward, supervised: gradient descent, LMS, backpropagation, reinforcement learning.
- Feedback, supervised: recurrent backpropagation.
- Feedforward, unsupervised: vector quantization, self-organizing maps, competitive learning, counter-propagation.
- Feedback, unsupervised: RABAM, Brownian annealing, Boltzmann learning, ABAM, ART-2, BAM, Cohen-Grossberg model, Hopfield circuit, brain-state-in-a-box, masking field, adaptive resonance ART-1/ART-2.

6.2 Global Equilibria: Convergence and Stability
Three dynamical systems operate in a neural network:
1) the synaptic dynamical system,
2) the neuronal dynamical system,
3) the joint neuronal-synaptic dynamical system.
Equilibrium is steady state (for fixed-point attractors). Convergence is synaptic equilibrium: dM/dt = 0. Stability is neuronal equilibrium: dx/dt = 0. More generally, neural signals reach steady state even though the activations still change. Global stability requires both, and stochastic global stability is its stochastic analogue.
Stability-convergence dilemma: neurons fluctuate faster than synapses fluctuate, and learning tends to destroy the very neuronal patterns being learned; convergence undermines stability.

6.3 Synaptic Convergence to Centroids: AVQ Algorithms
Competitive learning adaptively quantizes the input pattern space R^n. A probability density function p(x) characterizes the continuous distribution of patterns in R^n.
Competitive AVQ stochastic differential equations. The decision classes D_1, ..., D_k partition R^n into k classes. The centroid of D_j is
    x_bar_j = (integral over D_j of x p(x) dx) / (integral over D_j of p(x) dx).
The random indicator function I_{D_j}(x) equals 1 if x lies in D_j and 0 otherwise.
(Aside in the slides: in image processing, the gray-scale centroid method locates the centroid of a region from its intensity values.)
The stochastic unsupervised competitive learning (UCL) law:
    dm_j/dt = S_j(y_j) [x - m_j] + n_j,
so only the winning neuron (S_j = 1) moves its synaptic vector toward the current sample. Equilibrium: the winning synaptic vector stops moving on average, dm_j/dt = 0.
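The UCL law above can be sketched numerically (illustrative code, not from the slides): with slowly decreasing learning coefficients c_t = 1/t, the discrete winner update m_j <- m_j + c_t (x - m_j) computes exactly the running mean of the samples it wins, so the synaptic vector settles at the class centroid.

```python
import random

# Illustrative sketch (not from the slides) of the UCL law for a single
# decision class: the winning synaptic vector m_j moves toward each sample,
# dm_j/dt = x - m_j, discretized with coefficients c_t = 1/t.
random.seed(0)

def ucl_step(m, x, c):
    """One discrete UCL update of the winning synaptic vector."""
    return [mi + c * (xi - mi) for mi, xi in zip(m, x)]

# Samples from one decision class clustered around (2, 3).
samples = [(2 + random.uniform(-0.5, 0.5), 3 + random.uniform(-0.5, 0.5))
           for _ in range(2000)]

m = [0.0, 0.0]
for t, x in enumerate(samples, 1):
    m = ucl_step(m, x, c=1.0 / t)   # slowly decreasing coefficients

# With c_t = 1/t the update is the incremental mean, so m ends at the
# sample centroid of the class, matching the AVQ centroid theorem.
centroid = [sum(c) / len(samples) for c in zip(*samples)]
print(m, centroid)
```

The choice c_t = 1/t is the incremental-mean schedule; any slowly decreasing sequence that sums to infinity behaves similarly.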
As discussed in Chapter 4, the related linear learning laws are:
- Linear stochastic competitive learning law: dm_j/dt = I_{D_j}(x) [x - m_j] + n_j.
- Linear supervised competitive learning law: dm_j/dt = r_j(x) I_{D_j}(x) [x - m_j] + n_j, with reinforcement r_j(x) = +1 if x belongs to D_j and -1 otherwise.
- Linear differential competitive learning law: dm_j/dt = (dS_j/dt) [x - m_j] + n_j. In practice only the sign of the signal change is used.

Competitive AVQ Algorithms
1. Initialize the synaptic vectors, e.g. to the first k sample vectors: m_j(0) = x(j).
2. For a random sample x(t), find the closest ("winning") synaptic vector m_j(t): ||m_j(t) - x(t)|| = min_i ||m_i(t) - x(t)||.
3. Update the winning synaptic vector(s) by the UCL, SCL, or DCL learning algorithm.

Unsupervised Competitive Learning (UCL) defines a slowly decreasing sequence {c_t} of learning coefficients and updates the winner as m_j(t+1) = m_j(t) + c_t [x(t) - m_j(t)]. Example: c_t = 0.1 (1 - t/10000).
Supervised Competitive Learning (SCL) adds the reinforcement sign: m_j(t+1) = m_j(t) + c_t r_j(x(t)) [x(t) - m_j(t)].
Differential Competitive Learning (DCL) scales the update by Delta S_j, the time change of the jth neuron's competitive signal in the competition field F_Y: m_j(t+1) = m_j(t) + c_t Delta S_j [x(t) - m_j(t)]. In practice, only the sign of the signal difference, sgn(Delta S_j), or of the activation difference, sgn(Delta y_j), is used.
The fixed competition matrix W defines a symmetric lateral-inhibition topology within F_Y.

Stochastic Equilibrium and Convergence
Competitive synaptic vectors converge to decision-class centroids. The centroids may correspond to local maxima of the sampled but unknown probability density function p(x).
AVQ centroid theorem: if a competitive AVQ system converges, it converges to the centroid of the sampled decision class. Proof. Suppose the jth neuron in F_Y wins the competition and the jth synaptic vector m_j codes for decision class D_j, so S_j = I_{D_j}(x). At equilibrium the competitive law dm_j/dt = S_j [x - m_j] averages to zero, so the weighted integral of (x - m_j) over D_j vanishes. In general the AVQ centroid theorem concludes that at equilibrium E[m_j] = x_bar_j. Q.E.D.

6.4 AVQ Convergence Theorem
Competitive synaptic vectors converge exponentially quickly to pattern-class centroids. Proof. Consider the random quadratic form
    L = (1/2) sum_i sum_j (x_i - m_ij)^2.
Note L >= 0. The pattern vectors x do not change in time, and the synapses obey the competitive law dm_ij/dt = S_j [x_i - m_ij]. Choose the average E[L] as Lyapunov function for the stochastic competitive dynamical system, assuming sufficient smoothness to interchange the time derivative and the probabilistic integral, i.e. to bring the time derivative "inside" the integral. Then dE[L]/dt <= 0, so the competitive AVQ system is asymptotically stable and in general converges exponentially quickly to a locally asymptotically stable equilibrium. Suppose dE[L]/dt = 0. Since p(x) is a nonnegative weight function, the weighted integral of the learning differences x_i - m_ij must equal zero, and the average equilibrium synaptic vectors are centroids: E[m_j] = x_bar_j. Q.E.D.
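The three-step AVQ procedure above, in its UCL form, can be sketched on a toy two-class problem (data, initialization details, and per-winner coefficient schedule are illustrative, not from the slides):

```python
import random

# Sketch of the competitive AVQ algorithm (UCL variant) on two synthetic
# pattern classes. Data and initialization details are illustrative.
random.seed(1)

def winner(vectors, x):
    """Step 2: index j of the synaptic vector closest to sample x."""
    return min(range(len(vectors)),
               key=lambda j: sum((v - xi) ** 2 for v, xi in zip(vectors[j], x)))

# Two well-separated classes in the plane, centered near (0, 0) and (5, 5).
data = ([(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(1000)] +
        [(random.gauss(5, 0.3), random.gauss(5, 0.3)) for _ in range(1000)])
random.shuffle(data)

# Step 1: initialize synaptic vectors to sample vectors (one from each
# region here, so the demo is deterministic).
m = [list(next(p for p in data if p[0] < 2.5)),
     list(next(p for p in data if p[0] > 2.5))]

wins = [0, 0]
for x in data:
    j = winner(m, x)                  # Step 2: find the winner
    wins[j] += 1
    c = 1.0 / wins[j]                 # slowly decreasing coefficient
    m[j] = [mj + c * (xi - mj) for mj, xi in zip(m[j], x)]  # Step 3: UCL

print(sorted(v[0] for v in m))
```

Each synaptic vector ends near the centroid of the decision class it wins, as the AVQ centroid theorem predicts.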
6.5 Global Stability of Feedback Neural Networks
Global stability is a joint neuronal-synaptic steady state. Global stability theorems are powerful but limited. Their power: dimension independence, nonlinear generality, and exponentially fast convergence to fixed points. Their limitation: they do not tell us where the equilibria occur in the state space.

Stability-Convergence Dilemma
1. Asymmetry: neurons in F_X and F_Y fluctuate faster than the synapses in M.
2. Stability: neuronal steady state (pattern formation).
3. Learning: synapses change only while neuronal activations change.
4. Undoing: synaptic change in turn perturbs the neuronal steady state.
The ABAM theorem offers a general solution to the stability-convergence dilemma, and the RABAM theorem extends this result to stochastic neural processing in the presence of noise.

6.6 The ABAM Theorem
Hebbian ABAM model:
    dx_i/dt = -a_i(x_i) [ b_i(x_i) - sum_j S_j(y_j) m_ij ]
    dy_j/dt = -a_j(y_j) [ b_j(y_j) - sum_i S_i(x_i) m_ij ]
    dm_ij/dt = -m_ij + S_i(x_i) S_j(y_j)
Competitive ABAM model: replace the Hebbian synaptic law with the competitive law dm_ij/dt = S_j(y_j) [S_i(x_i) - m_ij].
If the positivity assumptions hold (strictly increasing signal functions, strictly positive amplification functions a_i and a_j), then the models are asymptotically stable. Proof. The proof uses a bounded Lyapunov function L and shows dL/dt <= 0 along trajectories, which proves global stability for the competitive ABAM system; strict decrease proves asymptotic global stability, and the squared velocities decrease exponentially quickly. Q.E.D.
Extensions: higher-order ABAMs, adaptive resonance ABAMs, and differential Hebbian ABAMs. (Instructor's note: the key is to find the regularities that solve the problem.)

6.7 Structural Stability of Unsupervised Learning
Is unsupervised learning structurally stable? Structural stability is insensitivity to small perturbations: it ignores many small perturbations, such perturbations preserve qualitative properties, and basins of attraction maintain their basic shape.
(Figure: manifold intersections in pattern space. Intersection points a and b are transversal; point c is not. Manifolds B and C need not intersect if even slightly perturbed. No points are transversal in 3-space unless B is a sphere or other solid.)

6.8 Random Adaptive Bidirectional Associative Memories
Brownian diffusions perturb RABAM models. Let B_i, B_j, and B_ij denote Brownian-motion (independent Gaussian increment) processes that perturb state changes in the ith neuron in F_X, the jth neuron in F_Y, and the synapse m_ij, respectively.
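The Lyapunov argument behind the stability theorems of sections 6.5-6.6 can be illustrated in miniature (an invented two-unit gradient system, not an actual ABAM): pick an L bounded below and verify that it never increases along trajectories, which forces convergence to a fixed point.

```python
# Illustration (invented example, not an ABAM) of the Lyapunov argument:
# choose L bounded below and verify dL/dt <= 0 along trajectories.
# The trajectory follows the gradient system x' = -grad L(x) with
# L(x) = (x0 - 1)^2 + 2*(x1 + 2)^2, integrated by explicit Euler steps.

def L(x):
    return (x[0] - 1) ** 2 + 2 * (x[1] + 2) ** 2

def grad_L(x):
    return [2 * (x[0] - 1), 4 * (x[1] + 2)]

x = [5.0, 5.0]
dt = 0.05
values = [L(x)]
for _ in range(200):
    x = [xi - dt * gi for xi, gi in zip(x, grad_L(x))]
    values.append(L(x))

# L never increases, so the state converges to the fixed point (1, -2),
# mirroring how dL/dt <= 0 forces trajectories toward equilibria.
assert all(b <= a for a, b in zip(values, values[1:]))
print(x)
```

The same reasoning, with a much more elaborate L, underlies the ABAM proof sketched above.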
The diffusion RABAM corresponds to the adaptive stochastic dynamical system
    dx_i = -a_i(x_i) [ b_i(x_i) - sum_j S_j(y_j) m_ij ] dt + dB_i
    dy_j = -a_j(y_j) [ b_j(y_j) - sum_i S_i(x_i) m_ij ] dt + dB_j
    dm_ij = [ -m_ij + S_i(x_i) S_j(y_j) ] dt + dB_ij.
We can replace the signal-Hebb diffusion law with the stochastic competitive law, or with differential Hebbian or differential competitive diffusion laws, if we impose tighter constraints to ensure global stability.
The signal-Hebbian noise RABAM model rewrites these diffusions with additive zero-mean, finite-variance noise processes n_i, n_j, and n_ij on the right-hand sides.
The RABAM theorem ensures stochastic stability. In effect, RABAM equilibria are ABAM equilibria that randomly vibrate; the noise variances control the range of vibration, and average RABAM behavior equals ABAM behavior.
RABAM Theorem. The RABAM model above is globally stable. If the signal functions are strictly increasing and the amplification functions a_i and a_j are strictly positive, the RABAM model is asymptotically stable. Proof. Use the ABAM Lyapunov function L for the RABAM system; then E[L] decreases along trajectories, dE[L]/dt <= 0, with strict decrease according as the system changes. Q.E.D.

6.9 Noise-Saturation Dilemma and the RABAM Noise-Suppression Theorem
Noise-saturation dilemma: how can neurons have an effectively infinite dynamical range when they operate between fixed upper and lower bounds, and yet not treat small input signals as noise? If the activations x_i are sensitive to large inputs, why do small inputs not get lost in internal system noise? If the x_i are sensitive to small inputs, why do they not all saturate at their maximum values in response to large inputs?
RABAM Noise-Suppression Theorem: as the above RABAM dynamical systems converge exponentially quickly, the mean-squared velocities of neuronal activations and synapses decrease exponentially quickly to their lower bounds, the instantaneous noise variances.
Guarantee: no noise processes can destabilize a RABAM if the noise processes have finite instantaneous variances and zero mean.

Thank you!
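The noise-suppression guarantee can also be illustrated numerically (an invented scalar example, not a full RABAM): adding zero-mean, finite-variance white noise to a stable gradient flow leaves the across-run average at the deterministic equilibrium; the noise only makes individual trajectories vibrate around it.

```python
import random

# Illustration (invented example, not a full RABAM) of the claim that
# zero-mean, finite-variance noise does not destabilize a stable system:
# Euler-Maruyama steps of x' = -grad L(x) + noise, averaged across runs,
# settle at the deterministic equilibrium (1, -2) of
# L(x) = (x0 - 1)^2 + 2*(x1 + 2)^2.
random.seed(2)

def grad_L(x):
    return [2 * (x[0] - 1), 4 * (x[1] + 2)]

def run(sigma, steps=400, dt=0.05):
    """One noisy trajectory; sigma scales the white-noise increments."""
    x = [5.0, 5.0]
    for _ in range(steps):
        x = [xi - dt * gi + sigma * random.gauss(0.0, dt ** 0.5)
             for xi, gi in zip(x, grad_L(x))]
    return x

finals = [run(sigma=0.5) for _ in range(300)]
avg = [sum(c) / len(finals) for c in zip(*finals)]
print(avg)
```

Individual runs scatter around (1, -2) with spread set by sigma, echoing the statement that noise variances control the range of vibration about ABAM equilibria.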