Fudan School of Management Doctoral Exam Materials — Econometrics, Lecture 4

Econometrics (I)
Lecture 4: Asymptotic Theory and Maximum Likelihood Estimation
Dr. Sun Pei (孙霈), Associate Professor in Industrial Economics, School of Management, Fudan University

Overview
- Basics of asymptotic theory: different concepts of stochastic convergence; the law of large numbers and the central limit theorem
- Large-sample properties of estimators: consistency, asymptotic normality, asymptotic efficiency
- Maximum likelihood estimation and estimators
- Tests of general restrictions on the basis of MLE and asymptotic theory

Concepts of Convergence
- Convergence in probability: a sequence of random variables X_1, X_2, ..., X_n converges in probability to c if lim_{n→∞} Pr(|X_n − c| > ε) = 0 for every ε > 0. As n goes to infinity, the probability that X_n differs from c converges to zero; c is the probability limit, also written plim X_n = c.
- Slutsky theorem: if g(·) is a continuous function, then plim g(X_n) = g(plim X_n). This implies that if plim X_n = c and plim Y_n = d, then plim(X_n + Y_n) = c + d, plim(X_n Y_n) = cd, and plim(X_n / Y_n) = c/d for d ≠ 0. The result generalizes to vectors and matrices: if X is an n × n random matrix with plim X = C, then plim X⁻¹ = C⁻¹.
- Convergence in mean square (quadratic mean): for a sequence of random variables X_1, X_2, ..., X_n, if lim E(X_n) = c and lim Var(X_n) = 0, then X_n converges to c in mean square. Convergence in mean square implies convergence in probability (by Chebyshev's inequality), but not the other way around.

Weak Law of Large Numbers
- Let X_1, X_2, ..., X_n be independently and identically distributed with E(X_i) = μ and Var(X_i) = σ² < ∞ (outliers are unlikely and observed infrequently). Then plim X̄_n = μ.
- Proof: E(X̄_n) = μ and Var(X̄_n) = σ²/n → 0, so X̄_n converges to μ in mean square and hence in probability.

Convergence in Distribution
- For a sequence of random variables X_1, X_2, ..., X_n with cumulative distribution functions F_1, F_2, ..., F_n: if lim_{n→∞} F_n(x) = F(x) at every continuity point of F, then X_n converges in distribution to X, written X_n →d X; F is the limiting distribution of X_n. The definition can equivalently be stated in the p.d.f. form.
- Convergence in distribution does not necessarily lead to convergence in probability.

Cramér's Theorem
- If plim X_n = c and Y_n converges in distribution to Y, then X_n + Y_n →d c + Y and X_n Y_n →d cY.
- In the multivariate case, if plim A_n = A and x_n →d x, then A_n x_n →d A x.

Central Limit Theorem
- Lindeberg–Lévy CLT: for i.i.d. X_i with mean μ and finite variance σ², √n (X̄_n − μ) →d N(0, σ²).
- In the multivariate case, √n (x̄_n − μ) →d N(0, Q), where Q is a finite positive definite matrix.
- Lindeberg–Feller CLT: extends the result to independent but not identically distributed observations.

Finite-Sample Properties of Estimators
- Unbiasedness: E(θ̂) = θ.
- Efficiency; Cramér–Rao lower bound (CRLB): attaining the bound is a sufficient though not necessary condition for an estimator to be efficient.
- Mean squared error: MSE(θ̂) = Var(θ̂) + [Bias(θ̂)]².

Large-Sample Properties of Estimators
- "Large sample" here does not mean a single sample whose size is gradually increased; it means many samples of a fixed size, with that fixed size gradually increased.
- Asymptotic unbiasedness: lim_{n→∞} E(θ̂_n) = θ.
- Consistency: an estimator is consistent if it is asymptotically unbiased and its variance converges to zero; likewise, an estimator is consistent if its MSE converges to zero.
- Consistency is a minimal requirement for an estimator: an inconsistent estimator can have potentially perverse effects.
- Asymptotic efficiency: defined via the asymptotic variance (Avar). An estimator is asymptotically efficient if it is consistent and no other consistent estimator has a smaller asymptotic variance. The smaller the Avar, the faster the asymptotic distribution collapses onto the parameter being estimated.

Asymptotic Properties of OLS Estimators
- Model setup: y = Xβ + ε.
- Consistency: plim β̂ = β.
- Asymptotic normality: √n (β̂ − β) →d N(0, σ²Q⁻¹), where Q = plim(X′X/n).
- Asymptotic efficiency. If f(·) is a set of continuous and differentiable functions of the OLS estimators, the resulting estimators are asymptotically normal and are consistent estimators of f(β).
- Properties of the error variance estimator.

Maximum Likelihood Estimation
- If we have n observations drawn from a population with parameter vector θ and joint density f(Y; θ), we can write down the likelihood function L(θ; Y) for the given parameters.
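The large-sample claims above (law of large numbers, consistency and asymptotic normality of OLS) lend themselves to simulation. A minimal NumPy sketch — the true coefficients, sample sizes, and replication count are arbitrary choices for the demo, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
beta_true = np.array([1.0, 2.0])   # hypothetical true coefficients, for illustration only

def ols_estimate(n):
    """Draw one sample of size n from y = X beta + eps and return the OLS estimate."""
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = X @ beta_true + rng.normal(size=n)            # eps ~ iid N(0, 1)
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Consistency: the sampling spread of beta_hat shrinks as n grows
est_small = np.array([ols_estimate(20) for _ in range(2000)])
est_large = np.array([ols_estimate(2000) for _ in range(2000)])
spread_small = est_small.std(axis=0)
spread_large = est_large.std(axis=0)   # roughly 1/sqrt(100) = 1/10 of spread_small

# Asymptotic normality: sqrt(n) * (beta_hat - beta) behaves like a normal draw
z = np.sqrt(2000) * (est_large[:, 1] - beta_true[1])
share_within_2sd = np.mean(np.abs(z - z.mean()) <= 2 * z.std())  # ~0.95 if normal
```

The 100-fold increase in sample size cuts the sampling standard deviation by about a factor of ten, the 1/√n rate implied by the asymptotic variance.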
- The maximum likelihood estimator (MLE) is the value of θ that maximizes the likelihood function given the sample data Y.
- In practice we often use the log-likelihood ln L(θ; Y) as the objective function.
- First-order condition: ∂ ln L / ∂θ = 0.
- Second-order condition: the Hessian ∂² ln L / ∂θ ∂θ′ is negative definite.

ML Estimators of the Linear Regression Model
- Likelihood function: for y = Xβ + ε with ε ~ N(0, σ²I), ln L = −(n/2) ln(2πσ²) − (y − Xβ)′(y − Xβ) / (2σ²).
- Solutions: β̂_ML = (X′X)⁻¹X′y (identical to OLS) and σ̂²_ML = e′e/n.

Large-Sample Properties of ML Estimators
- Consistency; asymptotic normality; asymptotic efficiency.
- Invariance: if C(θ) is continuous and differentiable, and θ̂ is the MLE of θ, then C(θ̂) is the MLE of C(θ).
- MLE of the linear regression model: consistency, asymptotic normality and efficiency.

Classical Tests of Parameter Restrictions
- H₀: C(θ) = q versus H₁: C(θ) ≠ q. C(θ) can be a set of linear or nonlinear functions of θ; if linear, H₀: Rθ = q.
- To test the null hypothesis, we can construct three test statistics, each with a limiting χ²(J) distribution, where J is the number of restrictions.
- Likelihood ratio (LR) test: estimate both the unrestricted and the restricted models and check whether the difference in log-likelihood values is significantly different from zero.
- Wald test: estimate only the unrestricted model and check whether C(θ̂) − q is close to zero. This is the idea that underlies the tests of general linear restrictions.
- Lagrange multiplier (LM) test: estimate only the restricted model and check whether the score is significantly different from zero.

Likelihood Ratio Test
- The likelihood ratio is λ = L_R / L_U, the ratio of the restricted to the unrestricted maximized likelihoods. Under H₀, the test statistic LR = −2 ln λ →d χ²(J).
- In the linear regression model, LR = n ln(e_R′e_R / e_U′e_U).

Wald Test
- We use only the unrestricted MLE. Under H₀, C(θ̂) − q should be close to zero. The test statistic is W = [C(θ̂) − q]′ {Avar[C(θ̂) − q]}⁻¹ [C(θ̂) − q] →d χ²(J).
- In the case of linear restrictions, the statistic is built on Rθ̂ − q.
- In the linear regression model, W = n (e_R′e_R − e_U′e_U) / e_U′e_U. Compare this with the F test for general linear restrictions under OLS estimation.

Lagrange Multiplier Test
- The unrestricted MLE is found by setting the score (gradient) vector ∂ ln L / ∂θ to zero.
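The closed-form ML solutions for the linear model, and the score condition just mentioned, can be checked numerically. A minimal NumPy sketch — the data-generating values are illustrative assumptions, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([0.5, -1.0]) + rng.normal(scale=1.5, size=n)   # illustrative DGP

def loglik(beta, sigma2):
    """Gaussian log-likelihood of the linear model y = X beta + eps."""
    e = y - X @ beta
    return -0.5 * n * np.log(2 * np.pi * sigma2) - (e @ e) / (2 * sigma2)

# Closed-form ML solutions from the first-order conditions
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)      # identical to the OLS estimator
resid = y - X @ beta_hat
sigma2_hat = (resid @ resid) / n                  # note: divides by n, not n - k

# The score in beta is (numerically) zero at the MLE ...
score_beta = X.T @ resid / sigma2_hat             # d lnL / d beta at the maximum

# ... and any perturbation away from the MLE lowers the log-likelihood
ll_max = loglik(beta_hat, sigma2_hat)
ll_perturbed = [
    loglik(beta_hat + np.array([0.05, 0.0]), sigma2_hat),
    loglik(beta_hat + np.array([0.0, -0.05]), sigma2_hat),
    loglik(beta_hat, 1.2 * sigma2_hat),
]
```

Evaluating the score at an estimator other than the unrestricted MLE is exactly the idea the LM test exploits below.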
- When the score vector is evaluated at the restricted MLE, it should still be close to zero under H₀.
- The test statistic is LM = g(θ̂_R)′ [I(θ̂_R)]⁻¹ g(θ̂_R) →d χ²(J), where g is the score and I the information matrix.
- The statistic is related to the Lagrange multiplier λ in the constrained optimization problem of maximizing ln L subject to C(θ) = q.
- In the linear regression model: run OLS on the restricted model to obtain the residuals, then run a second regression of those residuals on X to get the uncentered R²; the statistic is LM = nR².
- A large R² is evidence against H₀, since it implies that imposing the restrictions leaves important information in the residuals, which can be further explained by relaxing the restrictions.

Comments on the Three Tests
- Graphical illustration (figure not reproduced in this extract).
- It can be shown that as n goes to infinity, all three test statistics converge to J·F(J, n − k), so with a large sample size the three tests are asymptotically equivalent.
- The choice among the tests is largely based on ease of computation:
  - If the regression models before and after imposing the restrictions are both linear, use the LR test.
  - If the regression model is linear but becomes nonlinear after imposing the restrictions, use the Wald test.
  - If the regression model is nonlinear but becomes linear after imposing the restrictions, use the LM test.
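The three statistics can be computed side by side with the linear-model formulas above. A minimal NumPy sketch — the data-generating process and the single exclusion restriction are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # intercept + 2 regressors
y = X @ np.array([1.0, 0.5, 0.2]) + rng.normal(size=n)       # illustrative DGP

def ols_residuals(Z):
    """OLS residuals from regressing y on Z."""
    b = np.linalg.lstsq(Z, y, rcond=None)[0]
    return y - Z @ b

# H0: the last coefficient is zero (J = 1); the restricted model drops that column
e_u = ols_residuals(X)
e_r = ols_residuals(X[:, :2])
ssr_u, ssr_r = e_u @ e_u, e_r @ e_r

# The three classical statistics in the linear model
LR = n * np.log(ssr_r / ssr_u)
W = n * (ssr_r - ssr_u) / ssr_u

# LM: regress the restricted residuals on the full X, take n * uncentered R^2
fitted = X @ np.linalg.lstsq(X, e_r, rcond=None)[0]
LM = n * (fitted @ fitted) / (e_r @ e_r)
# In the linear model W >= LR >= LM in finite samples; all three -> chi2(1) under H0
```

With nested linear models the auxiliary-regression LM statistic equals n(e_R′e_R − e_U′e_U)/e_R′e_R, which makes the finite-sample ordering W ≥ LR ≥ LM easy to verify directly.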