Econometrics (English)

Multiple Regression Analysis
y = β0 + β1x1 + β2x2 + … + βkxk + u
2. Inference

Assumptions of the Classical Linear Model (CLM)

So far we know that, given the Gauss-Markov assumptions, OLS is BLUE. In order to do classical hypothesis testing, we need to add one more assumption (beyond the Gauss-Markov assumptions): assume that u is independent of x1, x2, …, xk and that u is normally distributed with zero mean and variance σ², that is, u ~ Normal(0, σ²).

CLM Assumptions (cont.)

Under the CLM assumptions, OLS is not only BLUE but also the minimum variance unbiased estimator. We can summarize the population assumptions of the CLM as

$y \mid x \sim \mathrm{Normal}(\beta_0 + \beta_1 x_1 + \dots + \beta_k x_k,\ \sigma^2).$

For now we simply assume normality, although it clearly does not always hold; large samples will let us drop the normality assumption.

[Figure: the homoskedastic normal distribution with a single explanatory variable. At each value of x (for example x1 and x2) the conditional density f(y|x) is a normal distribution with the same variance, centered on the line E(y|x) = β0 + β1x.]

Normal Sampling Distributions

Under the CLM assumptions, conditional on the sample values of the independent variables,

$\hat\beta_j \sim \mathrm{Normal}\big(\beta_j,\ \mathrm{Var}(\hat\beta_j)\big),$ so that $(\hat\beta_j - \beta_j)/\mathrm{sd}(\hat\beta_j) \sim \mathrm{Normal}(0,1).$

β̂j is distributed normally because it is a linear combination of the errors.

The t Test

Under the CLM assumptions,

$(\hat\beta_j - \beta_j)/\mathrm{se}(\hat\beta_j) \sim t_{n-k-1}.$

Note that this is a t distribution (rather than a normal) because we have to estimate σ² by σ̂². Note also the degrees of freedom: n − k − 1.

The t Test (cont.)

Knowing the sampling distribution of the standardized estimator allows us to carry out hypothesis tests. Start with a null hypothesis, for example H0: βj = 0. If we do not reject the null, we accept that xj has no effect on y, controlling for the other x's.

The t Test (cont.)

To perform the test we first form the t statistic for β̂j:

$t_{\hat\beta_j} = \hat\beta_j \big/ \mathrm{se}(\hat\beta_j).$

We then use this t statistic along with a rejection rule to determine whether to reject the null hypothesis H0: βj = 0.

t Test: One-Sided Alternatives

Besides the null H0, we need an alternative hypothesis, H1, and a significance level α. H1 may be one-sided, H1: βj > 0 or H1: βj < 0, or two-sided, H1: βj ≠ 0.

One-Sided Alternatives (cont.)

For H1: βj > 0, the critical value c is the (1 − α) percentile of the t distribution with n − k − 1 degrees of freedom: we reject H0 if the t statistic is greater than c, and fail to reject otherwise.

[Figure: one-sided test of H0: βj = 0 against H1: βj > 0. The t density has a "fail to reject" region (area 1 − α) to the left of the critical value c and a "reject" region (area α) to the right.]
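To make the mechanics concrete, here is a minimal sketch of the t test computed by hand. It uses Python with NumPy and SciPy rather than the Stata commands mentioned later in these notes; the data are simulated and all variable names are illustrative, not part of the original slides.

```python
# A minimal sketch of the one-sided t test "by hand" on simulated data
# (hypothetical names); not part of the original slides.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k = 200, 2                      # sample size, number of slope coefficients
x = rng.normal(size=(n, k))
u = rng.normal(scale=1.0, size=n)  # CLM: u ~ Normal(0, sigma^2), independent of x
y = 1.0 + 0.5 * x[:, 0] + 0.0 * x[:, 1] + u

X = np.column_stack([np.ones(n), x])          # add intercept column
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y                  # OLS estimates
resid = y - X @ beta_hat
df = n - k - 1                                # degrees of freedom
sigma2_hat = resid @ resid / df               # estimate of sigma^2
se = np.sqrt(sigma2_hat * np.diag(XtX_inv))   # standard errors

j = 2                                         # test H0: beta_2 = 0 vs H1: beta_2 > 0
t_stat = beta_hat[j] / se[j]                  # t = beta_hat_j / se(beta_hat_j)

alpha = 0.05
c = stats.t.ppf(1 - alpha, df)                # (1 - alpha) percentile of t_{n-k-1}
print(f"t = {t_stat:.3f}, one-sided critical value c = {c:.3f}")
print("reject H0" if t_stat > c else "fail to reject H0")
```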
One-Sided vs Two-Sided

Because the t distribution is symmetric, testing H1: βj < 0 is straightforward: the critical value is just the negative of the one used before. We reject the null if the t statistic is less than −c; if the t statistic is greater than −c, we fail to reject the null.

For a two-sided test, we set the critical value based on α/2 and reject H0: βj = 0 in favor of H1: βj ≠ 0 if the absolute value of the t statistic is greater than c.

[Figure: Two-Sided Alternatives. Test of H0: βj = 0 against H1: βj ≠ 0 in the model yi = β0 + β1xi1 + … + βkxik + ui. The t density has "reject" regions of area α/2 below −c(α/2) and above c(α/2), and a "fail to reject" region of area 1 − α in between.]

Summary for H0: βj = 0

Unless otherwise stated, the alternative is assumed to be two-sided. If we reject the null, we typically say that "xj is statistically significant at the α% level". If we fail to reject the null, we typically say that "xj is statistically insignificant at the α% level".

Testing Other Hypotheses

A more general form of the t statistic recognizes that we may want to test something like H0: βj = aj. In this case, the appropriate t statistic is

$t = \frac{\hat\beta_j - a_j}{\mathrm{se}(\hat\beta_j)},$

where aj = 0 for the standard test.

Confidence Intervals

Another way to use classical statistical testing is to construct a confidence interval, using the same critical value as for a two-sided test. A 100(1 − α)% confidence interval is defined as

$\hat\beta_j \pm c \cdot \mathrm{se}(\hat\beta_j),$ where c is the $(1 - \alpha/2)$ percentile of a $t_{n-k-1}$ distribution.

Computing p-values for t Tests

An alternative to the classical approach is to ask, "What is the smallest significance level at which the null would be rejected?" Compute the t statistic and then look up what percentile it corresponds to in the appropriate t distribution: this is the p-value. The p-value is the probability of observing a t statistic as extreme as the one we did, if the null were true.

Stata and p-values, t Tests, etc.

Most computer packages will compute the p-value for you, assuming a two-sided test. If you really want a one-sided alternative, just divide the two-sided p-value by 2. Stata reports the t statistic, the p-value, and the 95% confidence interval for H0: βj = 0 in the columns labeled "t", "P>|t|" and "95% Conf. Interval", respectively.

Testing a Linear Combination

Suppose that instead of testing whether β1 is equal to a constant, you want to test whether it is equal to another parameter, that is, H0: β1 = β2. Use the same basic procedure for forming a t statistic:

$t = \frac{\hat\beta_1 - \hat\beta_2}{\mathrm{se}(\hat\beta_1 - \hat\beta_2)}.$

Testing a Linear Combination (cont.)

Since $\mathrm{se}(\hat\beta_1 - \hat\beta_2) = \sqrt{\mathrm{Var}(\hat\beta_1 - \hat\beta_2)}$ and

$\mathrm{Var}(\hat\beta_1 - \hat\beta_2) = \mathrm{Var}(\hat\beta_1) + \mathrm{Var}(\hat\beta_2) - 2\,\mathrm{Cov}(\hat\beta_1, \hat\beta_2),$

we have

$\mathrm{se}(\hat\beta_1 - \hat\beta_2) = \big\{[\mathrm{se}(\hat\beta_1)]^2 + [\mathrm{se}(\hat\beta_2)]^2 - 2 s_{12}\big\}^{1/2},$

where s12 is an estimate of Cov(β̂1, β̂2).

Testing a Linear Combination (cont.)

To use this formula we need s12, which standard regression output does not report. Many packages have an option to obtain it, or will simply perform the test for you. In Stata, after "reg y x1 x2 … xk" you would type "test x1 = x2" to get a p-value for the test. More generally, you can always restate the problem to get the test you want.

Example

Suppose you are interested in the effect of campaign expenditures on outcomes. The model is

voteA = β0 + β1 log(expendA) + β2 log(expendB) + β3 prtystrA + u.

The null is H0: β1 = −β2, or equivalently H0: θ1 = β1 + β2 = 0. Since β1 = θ1 − β2, substitute in and rearrange:

voteA = β0 + θ1 log(expendA) + β2 [log(expendB) − log(expendA)] + β3 prtystrA + u.

Example (cont.)

This is the same model as before, but now you get a standard error for θ1 = β1 + β2 directly from the basic regression. Any linear combination of parameters can be tested in a similar manner. Other examples of hypotheses about a single linear combination of parameters: β1 = 1 + β2; β1 = 5β2; β1 = −(1/2)β2; and so on.
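The following sketch, again in Python on simulated data with hypothetical names (not from the original slides), illustrates the pieces just described: the two-sided p-value, the confidence interval β̂j ± c·se(β̂j), and the standard error of β̂1 − β̂2 built from the estimated covariance s12.

```python
# A minimal sketch, under the same simulated setup, of the two-sided p-value,
# the 95% confidence interval, and the t statistic for H0: beta_1 = beta_2.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k = 200, 2
x = rng.normal(size=(n, k))
y = 1.0 + 0.5 * x[:, 0] + 0.4 * x[:, 1] + rng.normal(size=n)

X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
resid = y - X @ b
df = n - k - 1
V = (resid @ resid / df) * XtX_inv            # estimated Var(beta_hat) matrix
se = np.sqrt(np.diag(V))

# Two-sided p-value and 95% CI for H0: beta_1 = 0
t1 = b[1] / se[1]
p_two_sided = 2 * stats.t.sf(abs(t1), df)     # P(|T| > |t|) under the null
c = stats.t.ppf(0.975, df)                    # (1 - alpha/2) percentile, alpha = 0.05
ci = (b[1] - c * se[1], b[1] + c * se[1])

# t statistic for H0: beta_1 = beta_2, using
# se(b1 - b2) = sqrt(se(b1)^2 + se(b2)^2 - 2*s12), s12 = estimated Cov(b1, b2)
s12 = V[1, 2]
se_diff = np.sqrt(se[1]**2 + se[2]**2 - 2 * s12)
t_diff = (b[1] - b[2]) / se_diff
print(f"p = {p_two_sided:.4f}, CI = ({ci[0]:.3f}, {ci[1]:.3f}), t_diff = {t_diff:.3f}")
```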
Multiple Linear Restrictions

Everything we have done so far has involved testing a single linear restriction, e.g. β1 = 0 or β1 = β2. However, we may want to jointly test multiple hypotheses about our parameters. A typical example is testing "exclusion restrictions": we want to know whether a group of parameters are all equal to zero.

Testing Exclusion Restrictions

Now the null hypothesis might be something like H0: βk−q+1 = 0, …, βk = 0, and the alternative is just H1: H0 is not true. We cannot just check each t statistic separately, because we want to know whether the q parameters are jointly significant at a given level; it is possible for none of them to be individually significant at that level.

Exclusion Restrictions (cont.)

To do the test we need to estimate the "restricted model" without xk−q+1, …, xk included, as well as the "unrestricted model" with all of the x's included. Intuitively, we want to know whether the change in SSR is big enough to warrant inclusion of xk−q+1, …, xk:

$F = \frac{(SSR_r - SSR_{ur})/q}{SSR_{ur}/(n-k-1)},$

where r denotes the restricted model and ur the unrestricted model.

The F Statistic

The F statistic is always positive, since the SSR from the restricted model cannot be less than the SSR from the unrestricted model. Essentially, the F statistic measures the relative increase in SSR when moving from the unrestricted to the restricted model. Here q = number of restrictions = dfr − dfur, and n − k − 1 = dfur.

The F Statistic (cont.)

To decide whether the increase in SSR when we move to the restricted model is "big enough" to reject the exclusions, we need to know the sampling distribution of our F statistic. Not surprisingly, F ~ F(q, n−k−1), where q is referred to as the numerator degrees of freedom and n − k − 1 as the denominator degrees of freedom.

[Figure: density f(F) of the F(q, n−k−1) distribution, with a "fail to reject" region (area 1 − α) below the critical value c and a "reject" region (area α) above it.]

Reject H0 at the α significance level if F > c.

The R² Form of the F Statistic

Because the SSRs may be large and unwieldy, an alternative form of the formula is useful. Using the fact that SSR = SST(1 − R²) for any regression, we can substitute for SSR_r and SSR_ur:

$F = \frac{(R^2_{ur} - R^2_r)/q}{(1 - R^2_{ur})/(n-k-1)},$

where again r denotes the restricted model and ur the unrestricted model.

Overall Significance

A special case of exclusion restrictions is the test of H0: β1 = β2 = … = βk = 0. Since the R² from a model with only an intercept is zero, the F statistic is simply

$F = \frac{R^2/k}{(1 - R^2)/(n-k-1)}.$

General Linear Restrictions

The basic form of the F statistic works for any set of linear restrictions. First estimate the unrestricted model, then estimate the restricted model, and in each case make note of the SSR. Imposing the restrictions can be tricky; you will likely have to redefine variables again.

Example

Use the same voting model as before,

voteA = β0 + β1 log(expendA) + β2 log(expendB) + β3 prtystrA + u,

but now the null is H0: β1 = 1, β3 = 0. Substituting in the restrictions gives voteA = β0 + log(expendA) + β2 log(expendB) + u, so use

voteA − log(expendA) = β0 + β2 log(expendB) + u

as the restricted model.

F Statistic Summary

Just as with t statistics, p-values can be calculated by looking up the percentile in the appropriate F distribution. Stata will do this if you enter: display fprob(q, n − k − 1, F), where the appropriate values of F, q, and n − k − 1 are used. If only one exclusion is being tested, then F = t², and the p-values will be the same.
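As a rough illustration of the F test for exclusion restrictions (not from the original slides), the sketch below fits an unrestricted and a restricted model on simulated data with hypothetical names and computes the F statistic both in its SSR form and in its R² form, together with the p-value from the F(q, n − k − 1) distribution. The two forms are algebraically identical, since SSR = SST(1 − R²).

```python
# A minimal sketch of the F test for exclusion restrictions on simulated data.
import numpy as np
from scipy import stats

def ols_ssr_r2(y, X):
    """Return (SSR, R^2) from an OLS fit of y on X (X includes the intercept)."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ b
    ssr = resid @ resid
    sst = np.sum((y - y.mean()) ** 2)
    return ssr, 1 - ssr / sst

rng = np.random.default_rng(1)
n, k, q = 300, 4, 2                           # q = number of exclusion restrictions
x = rng.normal(size=(n, k))
y = 1.0 + 0.8 * x[:, 0] + 0.3 * x[:, 1] + rng.normal(size=n)   # last two regressors irrelevant

X_ur = np.column_stack([np.ones(n), x])       # unrestricted: all k regressors
X_r = X_ur[:, :k - q + 1]                     # restricted: drop the last q regressors
ssr_ur, r2_ur = ols_ssr_r2(y, X_ur)
ssr_r, r2_r = ols_ssr_r2(y, X_r)

df = n - k - 1
F_ssr = ((ssr_r - ssr_ur) / q) / (ssr_ur / df)
F_r2 = ((r2_ur - r2_r) / q) / ((1 - r2_ur) / df)   # same value as the SSR form
p_value = stats.f.sf(F_ssr, q, df)            # upper-tail F(q, n-k-1) probability
print(f"F = {F_ssr:.3f} (R^2 form: {F_r2:.3f}), p-value = {p_value:.4f}")
```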