CONTENT VALIDITY
Jeffrey M. Miller
November 2003

Origins
- Content validity refers to the degree to which the content of the items reflects the content domain of interest (APA, 1954).
- Is the content about what we say the test is about?

Distinct or Subsumed?
- Guion's (1980) "Holy Trinity":
  1. Criterion-related (predictive/concurrent)
  2. Construct
  3. Content
- Cronbach (1984) and Messick (1989): the three are different methods of inquiry subsumed by an overarching construct validity.

Current Definition
- "Validity refers to the degree to which evidence and theory support the interpretations of test scores entailed by proposed uses of tests" (AERA/APA/NCME, 1999).

So Does Content Matter?
- Content is not part of the score, so it is not part of validity (Messick, 1975; Tenopyr, 1977).
- Content is a precursor to drawing a score-based inference; it is "evidence-in-waiting" (Shepard, 1993; Yalow & Popham, 1983).
- Content is a feature of the test, not the score.

Precursors to "Sloppy Validation"?
- The overarching construct validity paradigm relegates the status of content validity and justifies poor implementation.
- The current definition of validity does the same.
- Intended or unintended, what then happens to the validation of content?

Prophecy Fulfilled?
- "We fear that efforts to withdraw the legitimacy of content representativeness as a form of validity may, in time, substantially reduce attention to the import of content coverage" (Yalow & Popham, 1983).
- "Unfortunately, in many technical manuals, content representation is dealt with in a paragraph, indicating that selected panels of subject matter experts (SMEs) reviewed the test content, or mapped the items to the content standards, and all is well" (Crocker, 2003).

Recent Argument
- "Content representation is the only aspect of validation that can be completed prior to administering the test and reporting results. If this process yields disappointing results, there is still time to recoup" (Crocker, 2003).

The Standard Procedure
Crocker & Algina (1986):
1. Define the performance domain of interest.
2. Select a panel of qualified experts in the content domain.
3. Provide a structured framework for the process of matching items to the performance domain.
4. Collect and summarize data from the matching process.

Hambleton's (1980) 12 Steps
1. Prepare and select objective or domain specifications.
2. Clarify the test's purposes, desirable formats, number of items, and instructions for item writing.
3. Write items to measure the objectives.
4. Item writers perform the initial edit.
5. Systematically assess item-objective match to determine representativeness.
6. Perform additional item editing.
7. Assemble the test.
8. Select and implement a method for setting standards for interpreting performance.
9. Administer the test.
10. Collect data addressing reliability, validity, and norms.
11. Prepare the user's manual and technical manual.
12. Conduct ongoing studies relating the test to different situations and populations.

Beyond "The Experts Agreed"
- Although these procedures are explicit and detailed, the ultimate assurance of content validity rests on the method of authority.
- Our training in the importance of the scientific method may explain why "the experts agreed" doesn't sit well.
- We end up with quantitative item analysis, factor analysis, IRT, and Cronbach's alpha in the same report as qualitative expert agreement.

Katz's Percentage (1958)
- Experts rate whether or not each item taps the objective on a dichotomous yes/no scale.
- Let yes = 1 and no = 0, and let n = the number of 1s for a particular rater.
- The proportion is simply the sum of n across all raters divided by the product of the total number of items (N) and the total number of raters (J):

P = (sum of n) / (N * J)
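As a concrete illustration, here is a minimal sketch of Katz's percentage in Python; the function name and the toy rating matrix are mine, not part of the original presentation.

```python
# Katz's percentage: ratings is a J x N matrix of 0/1 judgments
# (rows = raters, columns = items; 1 = "item taps the objective").
def katz_percentage(ratings: list[list[int]]) -> float:
    """P = (total number of 'yes' ratings) / (N items * J raters)."""
    J = len(ratings)              # total raters
    N = len(ratings[0])           # total items
    total_yes = sum(sum(row) for row in ratings)
    return total_yes / (N * J)

# Three raters judging four items:
print(katz_percentage([[1, 1, 0, 1],
                       [1, 0, 0, 1],
                       [1, 1, 1, 1]]))  # 9 / 12 = 0.75
```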
The obvious limitations are:
- Influence by the number of items and/or raters.
- A dichotomous decision (hence no degree of certainty or uncertainty).
- Inclusion of all items (hence no regard for individual item weighting).
- No inclusion of objectives that are NOT intended to be measured, and no handling of multiple objectives.

Klein & Kosecoff's Correlation (1975)
- Experts rate the importance of each objective on a 1-to-5-point Likert scale; the mean or median serves as an index of the objective's relative importance.
- Judges then rate how well each item matches each objective on a yes (1) / no (0) scale.
- Let p = the proportion of judges who assign a 1 to an item on one objective, and let P = the sum of the p's for all items measuring a particular objective.
- Pearson's r is then computed between the objectives' importance indices and their item-match sums (P).
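A minimal sketch of one reading of this procedure; the data layout, names, and toy numbers are illustrative assumptions, not taken from Klein & Kosecoff.

```python
from statistics import correlation, mean  # correlation requires Python 3.10+

# importance[k]: experts' 1-5 importance ratings for objective k
importance = {"obj1": [5, 4, 5], "obj2": [3, 3, 2], "obj3": [4, 5, 4]}

# match[k]: for each item written to objective k, the proportion p of
# judges who assigned a 1 (item matches the objective)
match = {"obj1": [0.9, 1.0], "obj2": [0.4, 0.5], "obj3": [0.8, 0.7]}

objectives = list(importance)
imp_index = [mean(importance[k]) for k in objectives]  # relative importance
P = [sum(match[k]) for k in objectives]                # summed match proportions

print(correlation(imp_index, P))                       # Pearson's r
```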
Although this technique tries to control the problem of individual item weighting via importance ratings AND includes the possibility of multiple objectives, the limitations are:
- Again, sensitivity to the number of items and the number of judges.
- The possibility of a high r even when items do not match any objective.

Aiken's V (1985) Content-Validity Coefficient
- n experts rate the degree to which the item taps an objective on a 1-to-c Likert scale.
- Let lo = the lowest possible validity rating (usually 1 on the Likert scale), let r = the rating by an expert, let s = r - lo, and let S = the sum of s over the n raters.
- Aiken's V is then:

V = S / (n * (c - 1))

- The range is 0 to 1.0; a V of 1.0 means that all raters gave the item the highest possible rating.
- Aiken's V can be used with a right-tailed binomial probability table to obtain statistical significance.
- Aiken's V does not accommodate (1) objectives that are NOT intended to be measured or (2) multiple objectives.
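A minimal sketch of Aiken's V for a single item, assuming the definitions above; the significance lookup against Aiken's binomial table is omitted.

```python
def aikens_v(ratings: list[int], c: int, lo: int = 1) -> float:
    """V = S / (n * (c - 1)), where s = r - lo and S = sum of s."""
    n = len(ratings)
    S = sum(r - lo for r in ratings)   # each s = r - lo
    return S / (n * (c - 1))

# Five experts rating one item on a 1-to-4 scale:
print(aikens_v([4, 4, 3, 4, 4], c=4))  # 14 / 15 = 0.933...
```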
Rovinelli & Hambleton's Index of Item-Objective Congruence (1977)
- Content experts rate each item on how well it does (or does not) tap the established objectives:
  +1: item clearly taps the objective
  0: unsure/unclear
  -1: item clearly does not tap the objective
- Several competing objectives are provided for each item.
- A statistical formula (or SAS program) is applied to the ratings of each item across raters, yielding an index ranging from -1 to +1.
- An index of -1 can be interpreted as complete agreement by all experts that the item is measuring all the wrong objectives; an index of +1 as complete agreement that the item is measuring only the correct objective.
- The index assumes that each item taps one and only one objective; however, a formula (and SAS code) exists for situations where an item taps more than one objective.
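The slides do not reproduce the formula, so the sketch below uses the commonly cited one-objective form, I_k = N/(2N - 2) * (mu_k - mu), where N is the number of objectives, mu_k is the judges' mean rating of the item on its intended objective, and mu is the item's mean rating across all objectives. Treat this as my reconstruction, not the authors' SAS program.

```python
from statistics import mean

def ioc_index(ratings: dict[str, list[int]], target: str) -> float:
    """ratings maps each objective to the judges' -1/0/+1 ratings of one item."""
    N = len(ratings)                                  # number of objectives
    mu_k = mean(ratings[target])                      # mean on intended objective
    mu = mean(mean(r) for r in ratings.values())      # mean across all objectives
    return (N / (2 * N - 2)) * (mu_k - mu)

# Three judges, three competing objectives, item written to tap "obj1":
item = {"obj1": [1, 1, 1], "obj2": [-1, -1, 0], "obj3": [-1, -1, -1]}
print(ioc_index(item, "obj1"))  # about 0.92, near-perfect congruence
```

With unanimous +1 ratings on the target and -1 on every competitor, the index reaches exactly +1, matching the interpretation given above.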
Penfield's (2003) Score Interval
- Many of the quantification procedures address only the mean rating for an item; an improvement is to construct a confidence interval around that mean.
- Given a mean rating of 3.42 on a 4-point Likert scale, we could then say we are 95% confident that the true population mean rating lies between, say, 1.2 and 3.5 (imprecise) or between 3.4 and 3.5 (precise), and thereby gauge the accuracy of expert agreement.
- The traditional confidence interval assumes a normal distribution for the sample mean of a rating scale. That assumption cannot be justified for an individual scale item because (1) the item's outcomes are discrete, and (2) they are bounded by the limits of the Likert scale.
- The Score confidence interval instead treats rating-scale variables as outcomes of a binomial distribution. This asymmetric interval was shown to be robust to lack of fit to a binomial distribution, especially when the sample size and/or the number of scale categories is small (e.g., five or fewer).
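A hedged sketch of a Score (Wilson-type) interval under the binomial treatment just described: each 1-to-c rating is rescaled to (r - 1) successes out of (c - 1) trials, the Wilson score interval is applied to the pooled proportion, and the endpoints are mapped back to the rating metric. Details may differ from Penfield's exact derivation; this is an illustration, not a reproduction.

```python
from math import sqrt

def score_interval(ratings: list[int], c: int, z: float = 1.96) -> tuple[float, float]:
    """Asymmetric Score interval for the mean of 1-to-c ratings."""
    trials = len(ratings) * (c - 1)              # pooled Bernoulli trials
    p = sum(r - 1 for r in ratings) / trials     # pooled success proportion
    center = (p + z**2 / (2 * trials)) / (1 + z**2 / trials)
    half = (z * sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
            / (1 + z**2 / trials))
    # Map the proportion bounds back onto the 1-to-c rating metric.
    return 1 + (c - 1) * (center - half), 1 + (c - 1) * (center + half)

# Ten experts rating one item on a 4-point scale (mean rating 3.4):
print(score_interval([4, 4, 3, 3, 4, 3, 4, 3, 3, 3], c=4))  # about (2.88, 3.71)
```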
Conclusion
- Content validity addresses the adequacy and representativeness of the items relative to the domain and the purposes of testing.
- Content validity is not usually quantified, possibly due to (1) subsuming it within construct validity, (2) ignoring its importance, and/or (3) relying on accepted expert-agreement procedures.
- Indices are available, and there is a push toward improving the reporting of content-validation procedures.