Sparse Representation Lecture Slides (Michael Elad)

MMSE Estimation for Sparse Representation Modeling
Michael Elad, The Computer Science Department, The Technion, Israel Institute of Technology, Haifa 32000, Israel
Joint work with Irad Yavneh and Matan Protter, The CS Department, The Technion, April 6th, 2009

Noise Removal?
In this talk we focus on signal/image denoising, i.e., removing additive noise from a measurement y.
- Important: (i) it is a practical application; (ii) it is a convenient platform for testing basic ideas in signal/image processing.
- Many directions have been considered: partial differential equations, statistical estimators, adaptive filters, inverse problems and regularization, wavelets, example-based techniques, sparse representations, and more.
- Main message today: several sparse representations can be found and used for better denoising performance. We introduce, motivate, discuss, demonstrate, and explain this new idea.

Agenda
1. Background on denoising with sparse representations
2. Using more than one representation: intuition
3. Using more than one representation: theory
4. A closer look at the unitary case
5. Summary and conclusions

Part I: Background on Denoising with Sparse Representations

Denoising by Energy Minimization
Many of the proposed signal denoising algorithms are related to the minimization of an energy function of the form
  f(x) = (1/2) ||x - y||_2^2 + Pr(x),
where y is the given measurement, x is the unknown signal to be recovered, the first term expresses the relation to the measurements, and Pr(x) is the prior or regularization term (Thomas Bayes, 1702-1761).
- This is in fact a Bayesian point of view, adopting Maximum A Posteriori (MAP) estimation.
- Clearly, the wisdom in such an approach lies in the choice of the prior, i.e., in modeling the signals of interest.

Sparse Representation Modeling
We use a fixed dictionary D of size N x K (with K > N, so D is redundant).
- Every column of D is a prototype signal (an atom).
- The representation vector alpha is generated randomly, with few (say L for now) non-zeros at random locations and with random values, so the signal of interest is x = D alpha, a sparse and random combination of atoms.

Back to Our MAP Energy Function
- The L0 "norm" ||alpha||_0 effectively counts the number of non-zeros in alpha.
- The vector alpha is the (sparse and redundant) representation of the signal.
- Bottom line: denoising of y is done by solving
  min_alpha ||alpha||_0  subject to  ||D alpha - y||_2^2 <= eps^2,
or, equivalently, by minimizing ||D alpha - y||_2^2 subject to a limit on ||alpha||_0, and setting x_hat = D alpha_hat.

The Solver We Use: Greedy Methods
- Matching Pursuit (MP) is one of the greedy algorithms that find one atom at a time [Mallat & Zhang (93)].
- Step 1: find the single atom that best matches the signal.
- Next steps: given the previously found atoms, find the next one that best fits the residual.
- The algorithm stops when the error drops below the destination threshold.
- Orthogonal Matching Pursuit (OMP) is an improved version that re-evaluates the coefficients by least squares after each round.

Orthogonal Matching Pursuit
OMP finds one atom at a time while approximating the solution of the problem above. Initialization: set the residual to y and the support to the empty set. Main iteration: (1) compute the inner products between all atoms and the current residual; (2) add the atom with the largest absolute inner product to the support; (3) re-evaluate the coefficients on the support by least squares; (4) update the residual; (5) stop if the residual error is below the threshold, otherwise repeat.
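To make the greedy solver concrete, here is a minimal numpy sketch of OMP as outlined above. It is a sketch rather than the talk's own code; the function name, the argument names, and the optional cap on the number of atoms are illustrative choices.

```python
import numpy as np

def omp(D, y, eps, max_atoms=None):
    """Minimal OMP: add the atom most correlated with the residual, re-evaluate
    the coefficients on the support by least squares, and stop once the residual
    norm drops below eps (or max_atoms atoms have been chosen)."""
    N, K = D.shape
    max_atoms = N if max_atoms is None else max_atoms
    support = []
    coeffs = np.zeros(K)
    residual = y.astype(float)
    while np.linalg.norm(residual) > eps and len(support) < max_atoms:
        correlations = D.T @ residual                 # match every atom to the residual
        k = int(np.argmax(np.abs(correlations)))      # best-matching atom
        if k in support:                              # no further progress is possible
            break
        support.append(k)
        Ds = D[:, support]
        coef_s, *_ = np.linalg.lstsq(Ds, y, rcond=None)   # least-squares re-evaluation
        residual = y - Ds @ coef_s                        # residual update
        coeffs[:] = 0.0
        coeffs[support] = coef_s
    return coeffs, support
```

Calling x_hat = D @ coeffs then gives the denoised signal, matching the MAP energy formulation above.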
Part II: Using More than One Representation: Intuition

Back to the Beginning: What If...
Consider the denoising problem min ||alpha||_0 subject to ||D alpha - y||_2 <= eps, and suppose that we can find a group of J candidate solutions alpha_1, ..., alpha_J, all of them sparse and all of them fitting y to within the error threshold. Basic questions:
- What could we do with such a set of competing solutions in order to better denoise y?
- Why should this help?
- How shall we practically find such a set of solutions?
Relevant work: Larsson & Selen (07), Schniter et al. (08), Elad & Yavneh (08).

Motivation (General)
Why bother with such a set?
- Because each representation conveys a different story about the desired signal.
- Because pursuit algorithms are often wrong in finding the sparsest representation, and relying on a single solution is then too sensitive.
- Maybe there are "deeper" reasons?

Our Motivation
- There is an intriguing relationship between this idea and the common practice in example-based techniques, where several examples are merged.
- Consider Non-Local Means [Buades, Coll & Morel (05)]. It uses (i) a local dictionary (the neighborhood patches), (ii) several sparse representations of cardinality 1, and (iii) a merging of these representations.
- Why not take this further and use general sparse representations?

Generating Many Representations: Randomizing the OMP
Our answer is to randomize OMP: instead of deterministically picking the atom with the largest inner product against the residual, draw the next atom at random, with a probability that grows with that inner product (weights of the form exp(c |d_k^T r|^2), where r is the current residual and c is a parameter). Running this Random-OMP J times yields J different sparse representations.
*Larsson and Schniter propose more complicated, deterministic tree-pruning methods.
*For now, let us set the parameter c manually for best performance; later we shall define a way to set it automatically.

Let's Try It
Proposed experiment:
- Form a random dictionary D of size 100 x 200.
- Multiply it by a sparse vector alpha_0 to obtain the clean signal.
- Add Gaussian i.i.d. noise v with sigma = 1 and obtain the measurement y.
- Solve the problem using OMP and obtain one representation.
- Use Random-OMP and obtain a set of representations.
- Compare the obtained representations.

Some Observations
We see that OMP gives the sparsest solution; nevertheless, it is not the most effective for denoising. The cardinality of a representation does not reveal its denoising efficiency.

The Surprise (at Least for Us)
Let us propose the average of the Random-OMP representations as our representation. This representation is not sparse at all, yet it gives a smaller denoising error than the OMP solution.

Is It Consistent? Yes!
The same behavior is observed over 1000 trials with the same parameters, apart from a few cases in which the obtained solution is the zero vector.
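The following Python sketch shows one way to implement the Random-OMP idea and the averaging step described above: the only change relative to OMP is that the next atom is drawn at random with weights of the form exp(c * (d_k^T r)^2). The talk's exact weighting is parameterized by the same constant c, but the function names, the stabilization of the exponent, and the default parameters here are illustrative assumptions.

```python
import numpy as np

def rand_omp(D, y, eps, c, max_atoms, rng):
    """One Random-OMP run: instead of always taking the best-matching atom,
    draw it with probability proportional to exp(c * (d_k^T r)^2)."""
    N, K = D.shape
    support = []
    coeffs = np.zeros(K)
    residual = y.astype(float)
    while np.linalg.norm(residual) > eps and len(support) < max_atoms:
        corr = D.T @ residual
        logw = c * corr**2
        w = np.exp(logw - logw.max())      # rescaling does not change the probabilities
        w[support] = 0.0                   # never re-pick an atom already in the support
        if w.sum() == 0.0:
            break
        k = int(rng.choice(K, p=w / w.sum()))             # randomized atom selection
        support.append(k)
        Ds = D[:, support]
        coef_s, *_ = np.linalg.lstsq(Ds, y, rcond=None)   # least-squares re-evaluation
        residual = y - Ds @ coef_s
        coeffs[:] = 0.0
        coeffs[support] = coef_s
    return coeffs

def rand_omp_average(D, y, eps, c, max_atoms, J=10, seed=0):
    """Average J Random-OMP representations; the average is dense, yet (as in the
    experiment above) it typically denoises better than the single OMP solution."""
    rng = np.random.default_rng(seed)
    reps = [rand_omp(D, y, eps, c, max_atoms, rng) for _ in range(J)]
    return np.mean(reps, axis=0)
```

For the synthetic experiment above one would compare ||D @ omp_coeffs - x|| with ||D @ rand_omp_average(...) - x||, where x = D @ alpha_0 is the clean signal.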
Part III: Using More than One Representation: Theory

Our Signal Model
A fixed dictionary D of size N x K is used, and:
- D is fixed and known.
- The representation alpha is built by choosing the support s with probability P(s) from all 2^K possibilities; for simplicity, assume that |s| = k is fixed and known. The coefficients on s are then drawn as i.i.d. Gaussian entries, N(0, sigma_x^2).
- The ideal signal is x = D alpha = D_s alpha_s.
The p.d.f.s P(alpha) and P(x) are therefore clear and known.

Adding Noise
The measurement is y = x + v, where the noise v is an additive white Gaussian vector with p.d.f. P_v(v) = N(0, sigma^2 I). The conditional p.d.f.s P(y|s), P(s|y), and even P(x|y) are all clear and well defined (although they may appear nasty).

The Key: The Posterior P(x|y)
We have access to the posterior, and two natural estimators follow:
- MAP: x_hat_MAP = argmax_x P(x|y).
- MMSE: x_hat_MMSE = E[x|y].
A third, idealized estimator is the oracle, which knows the true support s. Estimating the representation alpha and multiplying by D is equivalent to estimating x directly. As we show next, the MAP and MMSE estimators are impossible to compute exactly.

Let's Start with the Oracle
When s is known, the oracle estimate of the coefficients is the regularized least-squares solution
  alpha_hat_s = (D_s^T D_s + (sigma^2 / sigma_x^2) I)^{-1} D_s^T y,
and the oracle estimate of x is obtained by multiplication by D_s. Under this conditional Gaussian model, the oracle estimate is both the MAP and the MMSE.

The MAP Estimation
The MAP estimate maximizes P(s|y) over the supports, using the same expressions that appeared in the oracle's probability for the support s. Implications:
- The MAP estimator requires testing all the possible supports for the maximization. In typical problems this is impossible, as there is a combinatorial set of possibilities.
- This is why we rarely use the exact MAP and typically replace it with approximation algorithms (e.g., OMP).

The MMSE Estimation
The MMSE estimate is the posterior mean,
  x_hat_MMSE = E[x|y] = sum_s P(s|y) * x_hat_oracle(s),
where x_hat_oracle(s) is the oracle estimate for the support s, as we have seen before. Implications:
- The best estimator (in terms of L2 error) is a weighted average of many sparse representations!
- As in the MAP case, in typical problems one cannot compute this expression, as the summation is over a combinatorial set of possibilities. We should propose approximations here as well.

The Case of |s| = k = 1
When the support contains a single atom, P(s|y) takes a simple per-atom form, and this per-atom probability is where the constant c of Random-OMP comes from. Based on it we can propose greedy algorithms for both MAP and MMSE:
- MAP: choose the atom with the largest inner product (out of K), one at a time, while freezing the previous ones (almost OMP).
- MMSE: draw an atom at random within a greedy algorithm, based on the above probability set, so that the overall draw gets close to sampling from P(s|y).

Bottom Line
- The MMSE estimate we obtained requires a sweep through all supports (a combinatorial search), which is impractical.
- Similarly, an explicit expression for P(x|y) can be derived and maximized; this is the MAP estimate, and it also requires a sweep through all possible supports, so it is impractical too.
- OMP is a (good) approximation of the MAP estimate.
- Random-OMP is a (good) approximation of the MMSE estimate. It is close to a Gibbs sampler of the probability P(s|y), from which we should draw the weights.
Back to the beginning: why use several representations? Because their average leads to a provably better noise suppression.

Comparative Results
The following results correspond to a small dictionary (20 x 30), for which the combinatorial formulas can also be evaluated exactly. Parameters: N = 20, K = 30, true support size 3, sigma_x = 1, J = 10 RandOMP runs, averaged over 1000 experiments. The plot shows the relative mean-squared error as a function of the noise level sigma for: (1) empirical oracle, (2) theoretical oracle, (3) empirical MMSE, (4) theoretical MMSE, (5) empirical MAP, (6) theoretical MAP, (7) OMP, and (8) RandOMP (the oracle curves assume a known support).
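For a dictionary as small as the 20 x 30 one used above, the combinatorial MAP and MMSE formulas can be evaluated directly. Here is a sketch of that evaluation under the Part III model (fixed cardinality k, a uniform prior over supports, i.i.d. N(0, sigma_x^2) coefficients, white Gaussian noise). The function and variable names are my own, and the Gaussian marginal P(y|s) with covariance sigma_x^2 D_s D_s^T + sigma^2 I is the standard consequence of that model rather than a formula quoted from the slides.

```python
import numpy as np
from itertools import combinations

def exact_map_mmse(D, y, k, sigma, sigma_x):
    """Enumerate all supports of size k and form the exact MAP and MMSE
    estimates of the clean signal x = D alpha under the Part III model."""
    N, K = D.shape
    log_post, oracle_estimates = [], []
    supports = list(combinations(range(K), k))
    for s in supports:
        Ds = D[:, list(s)]
        # Oracle estimate for this support: regularized least squares
        A = Ds.T @ Ds + (sigma**2 / sigma_x**2) * np.eye(k)
        alpha_s = np.linalg.solve(A, Ds.T @ y)
        oracle_estimates.append(Ds @ alpha_s)
        # log P(y|s): y is zero-mean Gaussian with covariance sigma_x^2 Ds Ds^T + sigma^2 I
        C = sigma_x**2 * (Ds @ Ds.T) + sigma**2 * np.eye(N)
        _, logdet = np.linalg.slogdet(C)
        log_post.append(-0.5 * (logdet + y @ np.linalg.solve(C, y)))
    log_post = np.array(log_post)
    weights = np.exp(log_post - log_post.max())       # proportional to P(s|y)
    weights /= weights.sum()
    x_map = oracle_estimates[int(np.argmax(weights))]                 # best single support
    x_mmse = sum(w * xs for w, xs in zip(weights, oracle_estimates))  # weighted average
    return x_map, x_mmse
```

With N = 20, K = 30 and k = 3 the loop visits 4060 supports, which is why the exact curves can be drawn in this experiment but not in realistic dimensions.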
the model assumed.qIt is also known that OMP in this case becomes exact.What about the MMSE?Could it have a simple closed-form solution too?34MMSE Estimation for Sparse Representation ModelingBy:Michael Elad The MMSE Again This is the formula we got:=We combine linearly many sparse representations(with proper weights)+The result is one effective representation(not sparse anymore)35MMSE Estimation for Sparse Representation ModelingBy:Michael Elad The MMSE Again This is the formula we got:qWe change the above summation towhere there are K contributions(one per each atom)to be found and used.qWe have developed a closed-form recursive formula for computing the q coefficients.36MMSE Estimation for Sparse Representation ModelingBy:Michael Elad Towards a Recursive Formula We have seen that the governing probability for the weighted averaging is given by Indicator function stating if j is in s37MMSE Estimation for Sparse Representation ModelingBy:Michael Elad The Recursive Formula where38MMSE Estimation for Sparse Representation ModelingBy:Michael Elad0.511.5s00.070.080.090.1Relative Mean-Squared-Error This is a synthetic experiment resembling the previous one,but with few important changes:2OracleKnown support An Example D is unitaryThe representations cardinality is 5(the higher it is,the weaker the Random-OMP becomes)Dimensions are different:N=K=64J=20(RandOMP runs)Theor.MAPOMPRecursive MMSETheor.MMSERandOMP39MMSE Estimation for Sparse Representation ModelingBy:Michael EladPart V Summary and Conclusions40MMSE Estimation for Sparse Representation ModelingBy:Michael Elad写在最后写在最后成功的基成功的基础在于好的学在于好的学习习惯The foundation of success lies in good habits41 结束语当你尽了自己的最大努力时,失败也是伟大的,所以不要放弃,坚持就是正确的。When You Do Your Best,Failure Is Great,So DonT Give Up,Stick To The End演讲人:XXXXXX 时 间:XX年XX月XX日