Please use this identifier to cite or link to this item:
https://nccur.lib.nccu.edu.tw/handle/140.119/87309
Title: | 評審間意見一致程度之問題探討 The Problems of Interrater Agreement
Authors: | 楊麗華 Yang, Li-Hua |
Contributors: | 江振東 Chiang, Jeng-Tung; 楊麗華 Yang, Li-Hua
Keywords: | 對數線性模式; 評審間意見一致程度; Loglinear model; Interrater agreement; Kappa
Date: | 1996 |
Issue Date: | 2016-04-28 11:48:47 (UTC+8) |
Abstract: | This study examines the measurement of interrater agreement. The kappa and weighted kappa coefficients and their applications are reviewed, along with loglinear models that include a parameter representing agreement. The kappa coefficient gives the practitioner a quick summary index of interrater agreement, whereas a loglinear model analysis yields more detailed information, so the two approaches are contrasted. For agreement among three raters, a new kappa-type coefficient, Kappa-ABC, is introduced and compared with the coefficient δ in the model log(m_ijk) = u + λ_i^A + λ_j^B + λ_k^C + δ·I(i=j=k); a simulation study tabulates the range of the fitted δ corresponding to values of Kappa-ABC, for the user's reference.
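For readers who want the quantities named in the abstract spelled out, the block below is a minimal LaTeX sketch of the standard textbook definition of Cohen's kappa and of a three-rater loglinear agreement model of the form the abstract describes. The notation (p_o, p_e, λ, δ) is assumed for illustration, not quoted from the thesis, whose full text is not attached to this record.

% A minimal sketch, assuming standard notation; the thesis's own symbols may differ.
% Cohen's kappa: observed agreement p_o corrected for chance agreement p_e.
\[
  \kappa = \frac{p_o - p_e}{1 - p_e},
  \qquad
  p_o = \sum_i p_{ii},
  \qquad
  p_e = \sum_i p_{i+}\, p_{+i}.
\]
% Loglinear model with an agreement parameter for three raters A, B, C:
% a main effect for each rater plus an extra term \delta on the diagonal cells
% where all three raters assign the same category.
\[
  \log m_{ijk} = u + \lambda^{A}_{i} + \lambda^{B}_{j} + \lambda^{C}_{k}
               + \delta\, I(i = j = k).
\]
% A larger \delta indicates agreement beyond what the raters' marginal
% distributions alone would produce; \delta = 0 reduces the model to mutual
% independence of the three raters.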
Reference: |
1. Agresti, A. (1988). "A model for agreement between ratings on an ordinal scale." Biometrics, 44, 539-548.
2. Agresti, A. (1990). Categorical Data Analysis. New York: Wiley.
3. Agresti, A. (1992). "Modelling patterns of agreement and disagreement." Statistical Methods in Medical Research, 201-218.
4. Bennett, E. M., Alpert, R., and Goldstein, A. C. (1954). "Communications through limited response questioning." Public Opinion Quarterly, 18, 303-308.
5. Blackman, N. J.-M. and Koval, J. J. (1993). "Estimating rater agreement in 2 x 2 tables: correction for chance and intraclass correlation." Applied Psychological Measurement, 17, 211-223.
6. Cicchetti, D. V. and Feinstein, A. R. (1990). "High agreement but low kappa: I. The problems of two paradoxes." Journal of Clinical Epidemiology, 43, 543-549.
7. Cicchetti, D. V. and Feinstein, A. R. (1990). "High agreement but low kappa: II. Resolving the paradoxes." Journal of Clinical Epidemiology, 43, 551-558.
8. Cohen, J. (1960). "A coefficient of agreement for nominal scales." Educational and Psychological Measurement, 20, 37-46.
9. Cohen, J. (1968). "Weighted kappa: nominal scale agreement with provision for scaled disagreement or partial credit." Psychological Bulletin, 70, 213-220.
10. Conger, A. J. (1980). "Integration and generalization of kappas for multiple raters." Psychological Bulletin, 88, 322-328.
11. Darroch, J. N. and McCloud, P. J. (1986). "Category distinguishability and observer agreement." Australian Journal of Statistics, 28, 371-388.
12. Dice, L. R. (1945). "Measures of the amount of ecologic association between species." Ecology, 26, 297-302.
13. Fleiss, J. L. and Cohen, J. (1969). "Large sample standard errors of kappa and weighted kappa." Psychological Bulletin, 72, 323-327.
14. Fleiss, J. L. (1971). "Measuring nominal scale agreement among many raters." Psychological Bulletin, 76, 378-382.
15. Fleiss, J. L. (1975). "Measuring agreement between two judges on the presence or absence of a trait." Biometrics, 31, 651-659.
16. Fleiss, J. L. (1981). Statistical Methods for Rates and Proportions, Chapter 13 (2nd ed.). New York: Wiley.
17. Goodman, L. A. and Kruskal, W. H. (1954). "Measures of association for cross classifications." Journal of the American Statistical Association, 49, 732-764.
18. Hubert, L. (1977). "Kappa revisited." Psychological Bulletin, 84, 289-297.
19. James, I. R. (1983). "Analysis of nonagreements among multiple raters." Biometrics, 39, 651-657.
20. Landis, J. R. and Koch, G. G. (1975a). "A review of statistical methods in the analysis of data arising from observer reliability studies (Part I)." Statistica Neerlandica, 29, 101-123.
21. Landis, J. R. and Koch, G. G. (1975b). "A review of statistical methods in the analysis of data arising from observer reliability studies (Part II)." Statistica Neerlandica, 29, 151-161.
22. Landis, J. R. and Koch, G. G. (1977a). "The measurement of observer agreement for categorical data." Biometrics, 33, 159-174.
23. Landis, J. R. and Koch, G. G. (1977b). "An application of hierarchical kappa-type statistics in the assessment of majority agreement among multiple observers." Biometrics, 33, 363-374.
24. Light, R. (1971). "Measures of response agreement for qualitative data: some generalizations and alternatives." Psychological Bulletin, 76, 365-377.
25. Mak, T. K. (1988). "Analyzing intraclass correlation for dichotomous variables." Applied Statistics, 37, 344-352.
26. Maxwell, A. E. and Pilliner, A. E. G. (1968). "Deriving coefficients of reliability and agreement for ratings." The British Journal of Mathematical and Statistical Psychology, 21, 105-116.
27. Rogot, E. and Goldberg, I. D. (1966). "A proposed index for measuring agreement in test-retest studies." Journal of Chronic Diseases, 19, 991-1006.
28. Scott, W. A. (1955). "Reliability of content analysis: the case of nominal scale coding." Public Opinion Quarterly, 19, 321-325.
29. Tanner, M. A. and Young, M. A. (1985). "Modeling agreement among raters." Journal of the American Statistical Association, 80, 175-180.
30. Zwick, R. (1988). "Another look at interrater agreement." Psychological Bulletin, 103(3), 374-378.
Description: | Master's thesis, Department of Statistics, National Chengchi University (國立政治大學 統計學系); 83354013
Source URI: | http://thesis.lib.nccu.edu.tw/record/#B2002002795 |
Data Type: | thesis |
Appears in Collections: | [統計學系] 學位論文 (Department of Statistics: Theses)
Files in This Item:
There are no files associated with this item.