National Chengchi University Institutional Repository (NCCUR): Item 140.119/66502
    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/66502


    Title: 混合試題與受試者模型於試題差異功能分析之研究
    A Mixture Items-and-Examinees Model Analysis on Differential Item Functioning
    Authors: 黃馨瑩
    Huang, Hsin Ying
    Contributors: 余民寧
    溫福星

    Yu, Min Ning
    Wen, Fur Hsing

    黃馨瑩
    Huang, Hsin Ying
    Keywords: mixture item response theory
    random item
    differential item functioning
    Date: 2013
    Issue Date: 2014-06-04 14:45:48 (UTC+8)
    Abstract: Drawing on the framework of the multilevel mixture item response theory model and the random item mixture model, this study proposes the mixture items and examinees (MIE) model. The purpose of the study was to assess the model's performance in detecting differential item functioning (DIF) under different sample sizes and different numbers of DIF items, as well as the accuracy of its parameter recovery. The results showed that with large samples and a larger number of DIF items, the MIE model achieved accurate parameter recovery, correctly identified whether items exhibited DIF, produced good estimates of item difficulty, and classified examinees into the correct sub-groups; its performance was quite similar to that of the random item mixture model. The findings suggest that future studies apply the MIE model to analyses of large-scale education databases and extend it with additional variables.
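The abstract describes a simulation setup in which dichotomous responses follow a mixture Rasch model: latent classes share most item difficulties, but a subset of DIF items differ in difficulty across classes. The following is a minimal, hypothetical sketch of that data-generating idea, not the dissertation's actual code; the class sizes, the 1-logit DIF effect, and the crude proportion-correct screen are all illustrative assumptions.

```python
# Illustrative sketch only: two-class mixture Rasch data with a few DIF items.
# All parameter values and names below are assumptions, not the study's design.
import math
import random

random.seed(42)

N_PER_CLASS = 1000          # examinees simulated per latent class
N_ITEMS = 20                # test length
DIF_ITEMS = {0, 1, 2, 3}    # items whose difficulty shifts in class 2
DIF_SIZE = 1.0              # size of the difficulty shift, in logits

def p_correct(theta, b):
    """Rasch model: P(X = 1 | theta, b) = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Class-specific difficulties: identical except on the DIF items.
base_b = [random.gauss(0.0, 1.0) for _ in range(N_ITEMS)]
b_class = {
    1: base_b,
    2: [b + (DIF_SIZE if i in DIF_ITEMS else 0.0)
        for i, b in enumerate(base_b)],
}

def simulate(n, cls):
    """Draw n examinees with theta ~ N(0, 1) and score each item."""
    data = []
    for _ in range(n):
        theta = random.gauss(0.0, 1.0)
        data.append([int(random.random() < p_correct(theta, b))
                     for b in b_class[cls]])
    return data

responses = {c: simulate(N_PER_CLASS, c) for c in (1, 2)}

# Crude DIF screen: compare per-item proportions correct between the two
# (here known) classes. A real analysis would instead fit the mixture model
# with class membership unknown.
def prop_correct(data, item):
    return sum(row[item] for row in data) / len(data)

gaps = [prop_correct(responses[1], i) - prop_correct(responses[2], i)
        for i in range(N_ITEMS)]
flagged = [i for i, g in enumerate(gaps) if g > 0.1]
print("items flagged by the crude screen:", flagged)
```

A real analysis would estimate class membership and item parameters jointly, as the MIE and random item mixture models do; the proportion-correct comparison here only verifies that the simulated DIF items behave differently across the two latent classes.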
    Description: Doctoral dissertation
    National Chengchi University
    Graduate Institute of Education
    98152501
    102
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0098152501
    Data Type: thesis
    Appears in Collections:[Department of Education] Theses

    Files in This Item:

    File: 250101.pdf (459 KB, Adobe PDF)


    All items in 政大典藏 are protected by copyright, with all rights reserved.



    Copyright Announcement
    1. The digital content of this website is part of the National Chengchi University Institutional Repository. It is provided free of charge for non-commercial uses such as academic research and public education; please use it in a proper and reasonable manner that respects the rights of copyright owners. For commercial use, please obtain authorization from the copyright owner in advance.

    2. This website has been built with care to avoid infringing the rights of copyright owners. If you believe that any material on the website infringes copyright, please contact the site maintainers (nccur@nccu.edu.tw); they will promptly take remedial measures such as removing the work from the repository.
    DSpace Software Copyright © 2002-2004 MIT & Hewlett-Packard / Enhanced by NTU Library IR team