    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/143334


    Title: 2018 至 2022 年學測和指考英文閱讀測驗困難題目之分析
    Analysis of Difficult English Reading Comprehension Test Items in the GSAT and AST from 2018 to 2022
    Authors: Huang, Pin-Hsuan (黃品瑄)
    Contributors: Yu, Hsueh-Ying (尤雪瑛)
    Huang, Pin-Hsuan (黃品瑄)
    Keywords: GSAT
    AST
    English reading comprehension section
    Reading constructs
    Item difficulty analysis
    Date: 2022
    Issue Date: 2023-02-08 15:21:36 (UTC+8)
    Abstract: The present study aims to investigate the reading constructs measured in the difficult English reading comprehension test items of the GSAT and AST from 2018 to 2022, and to explore possible factors contributing to their difficulty. A qualitative item analysis was conducted: a total of 28 difficult test items were analyzed against a revised version of Karakoc’s (2019) reading constructs list. The difficult items and their accompanying reading passages were then examined with four difficulty predictor variables: (a) Readability, (b) Text Structure Types, (c) Question Types, and (d) Plausibility of Distractors.
    The findings indicated that four types of reading constructs were commonly measured in the difficult items. The most frequently tested construct was “Understanding facts, details and specific information,” followed by “Understanding a main idea and general information,” “Identifying both main idea and the details of a paragraph,” and “Guessing a meaning of an unknown word from the context.” Regarding the potential factors of item difficulty, the results showed that “Readability” and “Text Structure Types” might not be critical factors contributing to item difficulty, while “Question Types” and “Plausibility of Distractors” might be. For textually-explicit items, difficulty resulted from three factors: ambiguity of the stem, paraphrased options, and distance between clues. Textually-implicit items were difficult because they often required students to synthesize information and make inferences. As for distractors, they were plausible when they reflected students’ misconceptions or resembled the correct answer. Two misconceptions, related to “logic relations between sentences” and “irrelevant schemata,” were identified, along with three types of distractor similarity: in structure, in content, and in wording. All of these factors interact to influence students’ difficulty in answering reading comprehension questions. The study closes with suggestions for reading instruction and future research, in the hope that this item analysis can benefit English language teaching.
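    Of the four predictor variables above, “Readability” is typically quantified with a formula such as Flesch’s (1948) Reading Ease, which appears in the reference list below. A minimal sketch of that formula (the word, sentence, and syllable counts here are hypothetical; real use would require a syllable counter):

    ```python
    def flesch_reading_ease(words, sentences, syllables):
        """Flesch (1948) Reading Ease: higher scores mean easier text.

        Scores of 60-70 correspond roughly to "plain English";
        dense academic prose often falls below 50.
        """
        return (206.835
                - 1.015 * (words / sentences)
                - 84.6 * (syllables / words))

    # Hypothetical passage: 100 words, 5 sentences, 140 syllables
    score = flesch_reading_ease(100, 5, 140)  # ≈ 68.1, "plain English" range
    ```

    The Flesch-Kincaid variant (Kincaid et al., 1975, also cited below) rescales the same two ratios onto U.S. grade levels.
    
    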
    Reference: Alderson, J. C. (2000). Assessing reading. Cambridge University Press.
    Alderson, J. C., & Lukmani, Y. (1989). Cognition and reading: Cognitive levels as embodied in test questions. Reading in a Foreign Language, 5(2), 253-270.
    Alonzo, J., Basaraba, D., Tindal, G., & Carriveau, R. S. (2009). They read, but how well do they understand? An empirical look at the nuances of measuring reading
    comprehension. Assessment for Effective Intervention, 35(1), 34-44.
    Ascalon, M. E., Meyers, L. S., Davis, B. W., & Smits, N. (2007). Distractor similarity and item-stem structure: Effects on item difficulty. Applied Measurement in Education, 20(2), 153-170.
    Bachman, L. F. (1990). Fundamental considerations in language testing. Oxford University Press.
    Baghaei, P., & Ravand, H. (2015). A cognitive processing model of reading comprehension in English as a foreign language using the linear logistic test model. Learning and Individual Differences, 43, 100-105.
    Bailin, A., & Grafstein, A. (2016). Readability: Text and context. Springer.
    Barton, M. L. (1997). Addressing the literacy crisis: Teaching reading in the content areas. National Association of Secondary School Principals, 81(587), 22-30.
    Beck, L.L., McKeown, M.G., Sinatra, G.M., and Loxterman, J.A. (1991). Revising social studies text from a text-processing perspective: evidence of improved comprehensibility. Reading Research Quarterly, 26(3), 251-276.
    Best, R. M., Floyd, R. G., & McNamara, D. S. (2008). Differential competencies contributing to children's comprehension of narrative and expository texts. Reading Psychology, 29(2), 137-164.
    Brown, H.D. (2001). Teaching by principles: An interactive approach to language pedagogy. 2nd ed. New York: Longman.
    Brown, H.D. (2004). Language assessment: Principles and classroom practices. New York: Pearson Education.
    Buck, G. (2001). Assessing listening (pp. 115-153). Cambridge University Press.
    Carrell, P. L. (1987). Readability in ESL. Reading in a Foreign Language, 4, 21-40.
    Carrell, P. L., Devine, J., & Eskey, D. E. (Eds.). (1988a). Interactive approaches to second language reading. Cambridge University Press.
    Carrell, P. L. (1988b). Some causes of text-boundedness and schema interference in ESL reading. In P. L. Carrel, J. Devine, & E. Eskey (Eds.), Interactive approaches to second language reading (pp. 103-113). New York: Cambridge
    University Press.
    Case, S. M., & Swanson, D. B. (1998). Constructing written test questions for the basic and clinical sciences (2nd ed., pp. 22-25). Philadelphia: National Board of Medical Examiners
    Chikalanga, I. (1992). A suggested taxonomy of inferences for the reading teacher. Reading in a Foreign Language, 8(2), 697-709.
    Chen, H. C. (2009). An analysis of the reading skills measured in reading comprehension tests on the Scholastic Achievement English Test (SAET) and the Department Required English Test (DRET). Unpublished master's thesis. Taipei: National Taiwan Normal University.
    Clinton, V., Taylor, T., Bajpayee, S., Davison, M. L., Carlson, S. E., & Seipel, B.(2020). Inferential comprehension differences between narrative and expository
    texts: a systematic review and meta-analysis. Reading and Writing, 33(9), 2223-2248.
    Collins, J. (2006). Writing multiple-choice questions for continuing medical education activities and self-assessment modules. Radiographics, 26, 543-551.
    College Entrance Examination Center. (2016a). 107 GSAT English test-preparation guide. Retrieved from: https://reurl.cc/QbjXXM
    College Entrance Examination Center. (2016b). 107 AST English test-preparation guide. Retrieved from: https://reurl.cc/aGkVRX
    College Entrance Examination Center. (2019). 111 GSAT English test-preparation guide. Retrieved from: https://reurl.cc/qNOL33
    Davey, B. (1988). Factors Affecting the Difficulty of Reading Comprehension Items for Successful and Unsuccessful Readers. The Journal of Experimental Education, 56(2), 67–76.
    Davis, F. B. (1944). Fundamental factors of comprehension in reading. Psychometrika, 9(3), 185-197.
    Dale, E., & Chall, J. (1948). A formula for predicting readability. Educational Research Bulletin, 27, 37-54.
    Flesch, R. (1948). A new readability yardstick. Journal of Applied Psychology, 32, 221-233.
    Fuchs, L. S. (2002). Examining the reading difficulty of secondary students with learning disabilities. Remedial & Special Education, 23(1), 31-41.
    Gough, P. B. (1972). One second of reading. Visible Language, 6(4), 291-320.
    Goodman, K.S. (1967). Reading: A psycholinguistic guessing game. In H. Singer & R. Ruddell (Eds.), Theoretical models and processes of reading. Network, Delaware: International Reading Association.
    Grabe, W. (1991). Current developments in second language reading research. TESOL Quarterly, 25(3), 375-406.
    Grabe, W., & Stoller, F. L. (2002). The nature of reading abilities. In Teaching and researching reading, 9-39. Boston: Pearson Education.
    Grabe, W. (2002). Narrative and expository macro-genres. Genre in the classroom: Multiple perspectives, 249-267.
    Gray, W.S. & Leary, B.E. (1935). What makes a book readable. Chicago: University of Chicago Press.
    Gray, W.S. (1960). The major aspects of reading. In H. Robinson (ed.), Sequential development of reading abilities (Vol. 90, pp. 8-24). Chicago: Chicago University Press.
    Graesser, A. C., McNamara, D. S., & Louwerse, M. M. (2003). What do readers need to learn in order to process coherence relations in narrative and expository text?
    Rethinking reading comprehension (pp. 82–99). New York: Guilford Press.
    Hallgren, K. A. (2012). Computing inter-rater reliability for observational data: an overview and tutorial. Tutorials in Quantitative Methods for Psychology, 8(1), 23.
    Hsu, W. L. (2005). An analysis of the reading comprehension questions in the JCEE English test. Unpublished master's thesis. Kaohsiung: National Kaohsiung Normal University.
    Huang, T.S. (1994). A qualitative analysis of the JCEE English tests. Taipei: The Crane Publishing Company.
    Hudson, T. (1996). Assessing second language academic reading from a communicative competence perspective: Relevance for TOEFL 2000. Princeton, NJ: Educational Testing Service.
    Hughes, A. (2003) Testing for language teachers. New York: Cambridge University Press.
    Jamieson, J., Jones, S., Kirsch, I., Mosenthal, P., & Taylor, C. (2000). TOEFL 2000 framework. Princeton, NJ: Educational Testing Service.
    Jang, E. E. (2009). Cognitive diagnostic assessment of L2 reading comprehension ability: Validity arguments for fusion model application to language assessment. Language Testing, 26(1), 31-73.
    Jeng, H. S. (2001). A comparison of the English reading comprehension passages and items in the College Entrance Examinations of Hong Kong, Taiwan and Mainland China. Concentric: Studies in Linguistics, 27(2), 217-251.
    Jeng, H., Chen, L. H., Hadzima, A. M., Lin, P. Y., Martin, R., Yeh, H. N., ... & Wu, H. C. (1999). An Experiment on Designing English Proficiency Tests of Two
    Difficulty Levels for the College Entrance Examination in Taiwan. In Second International Conference on English Testing in Asia held at Seoul National University, Korea, included in the Proceedings (pp. 12-38).
    Just, M. A., & Carpenter, P. A. (1992). A capacity theory of comprehension: individual differences in working memory. Psychological Review, 99(1), 122.
    Karakoc, A. I. (2019). Reading and listening comprehension subskills: The match between theory, coursebooks, and language proficiency tests. Advances in Language and Literary Studies, 10(4), 166-185.
    Kincaid, J. P., Fishburne Jr, R. P., Rogers, R. L., & Chissom, B. S. (1975). Derivation of new readability formulas (automated readability index, fog count and flesch
    reading ease formula) for navy enlisted personnel. Naval Technical Training Command Millington TN Research Branch.
    Kirby, J. R. (1988). Style, strategy, and skill in reading. In Learning strategies and learning styles (pp. 229-274). Springer, Boston, MA.
    Kirsch, I. S., & Mosenthal, P. B. (1990). Exploring document literacy: Variables underlying the performance of young adults. Reading Research Quarterly, 5-30.
    Klare, G. R. (1968). The role of word frequency in readability. Elementary English, 45(1), 12-22.
    Lai, H., Gierl, M. J., Touchie, C., Pugh, D., Boulais, A., & De Champlain, A. (2016). Using automatic item generation to improve the quality of MCQ distractors. Teaching and Learning in Medicine, 28, 166–173.
    Lan, W. H., & Chern, C. L. (2010). Using Revised Bloom's Taxonomy to analyze reading comprehension questions on the SAET and the DRET. Contemporary Educational Research Quarterly, 18(3), 165-206.
    Lin, C. (2010). Evaluating readability of a university Freshman EFL reader. Studies in English for professional communications and applications, 75-88.
    Livingston, S. A. (2009). Constructed-Response Test Questions: Why We Use Them; How We Score Them. R&D Connections. Number 11. Educational Testing Service.
    Lu, J. Y. (2002). An analysis of the reading comprehension test given in the English Subject Ability Test in Taiwan and its pedagogical implications. Unpublished master's thesis. Taipei: National Chengchi University.
    Meyer, B. J. (2017). Prose Analysis: Purposes, Procedures, and Problems 1. In Understanding expository text (pp. 11-64). Routledge.
    McCarthy, M. (1991). Discourse analysis for language teachers. Cambridge University Press.
    McCormick, S. (1992). Disabled readers' erroneous responses to inferential comprehension questions: Description and analysis. Reading Research Quarterly, 27(1), 55-77.
    McHugh, M. L. (2012). Interrater reliability: the kappa statistic. Biochemia medica, 22(3), 276-282.
    McNamara, D. S., Graesser, A. C., & Louwerse, M. M. (2012). Sources of text difficulty: Across genres and grades. Measuring up: Advances in how we assess reading ability, 89-116.
    Mitkov, R. (2003). Computer-aided generation of multiple-choice tests. In Proceedings of the HLT-NAACL 03 workshop on Building educational applications using natural language processing (pp. 17-22).
    Mitkov, R., Varga, A., & Rello, L. (2009). Semantic similarity of distractors in
    multiple-choice tests: extrinsic evaluation. In Proceedings of the workshop on geometrical models of natural language semantics (pp. 49-56).
    Mikulecky, B. S., & Jeffries, L. (2007). Advanced reading power: Extensive reading, vocabulary building, comprehension skills, reading faster. White Plains, NY: Pearson.
    Munby, J. (1978) Communicative Syllabus Design. Cambridge, Cambridge University Press.
    Nemati, M. (2003). The relationship between topic difficulty and mode of discourse: An in-depth study of EFL writers production, recognition, and attitude. Iranian
    Journal of Applied Linguistics, 6(2), 87-116.
    Nuttall, C. (2005). Teaching reading skills in a foreign language (3rd ed.). Macmillan Education.
    Ozuru, Y., Rowe, M., O'Reilly, T., & McNamara, D. S. (2008). Where's the difficulty in standardized reading tests: The passage or the question? Behavior Research Methods, 40(4), 1001-1015.
    Perkins, K., & Brutten, S. R. (1988). An item discriminability study of textually explicit, textually implicit, and scripturally implicit questions. RELC Journal,
    19(2), 1-11.
    Rumelhart, D. E. (1977). Toward an interactive model of reading. In S. Dornic (Ed.), Attention and performance VI, (pp. 573-603). Hillsdale, NJ: Erlbaum.
    Sáenz, L. M., & Fuchs, L. S. (2002). Examining the reading difficulty of secondary students with learning disabilities: Expository versus narrative text. Remedial
    and Special Education, 23(1), 31-41.
    Seddon, G.M. (1978). The properties of Bloom’s taxonomy of educational objectives for the cognitive domain. Review of Educational Research,48(2), 303-323.
    Spencer, M., Gilmour, A. F., Miller, A. C., Emerson, A. M., Saha, N. M., & Cutting, L. E. (2019). Understanding the influence of text complexity and question type on reading outcomes. Reading and writing, 32(3), 603-637.
    Stenner, A. J. (1996). Measuring reading comprehension with the Lexile framework.
    Smith, R. L., & Smith, J. K. (1988). Differential use of item information by judges using Angoff and Nedelsky procedures. Journal of Educational Measurement, 25, 259–274.
    Tarrant, M., Ware, J., & Mohammed, A. M. (2009). An assessment of functioning and non-functioning distractors in multiple-choice questions: A descriptive analysis. BMC Medical Education, 9(40), 1-8.
    Testa, S., Toscano, A., & Rosato, R. (2018). Distractor efficiency in an item pool for a statistics classroom exam: Assessing its relation with item cognitive level classified according to Bloom's taxonomy. Frontiers in Psychology, 9, 1585.
    Towns, M. H. (2014). Guide to developing high-quality, reliable, and valid multiple-choice assessments. Journal of Chemical Education, 91, 1426–1431.
    Urquhart, A. H., & Weir, C. J. (2014). Reading in a second language: Process, Product and Practice. Routledge.
    Vacc, N. A., Loesch, L. C., & Lubik, R. E. (2001). Writing multiple-choice test items. In G. R. Walz & J. C. Bleuer (Eds.), Assessment: Issues and challenges for the millennium (pp. 215-222). Greensboro, NC: ERIC Clearinghouse on Counseling and Student Services.
    Wangru, C. (2016). Vocabulary teaching based on semantic-field. Journal of Education and Learning, 5(3), 64-71.
    Warrens, M. J. (2015). Five ways to look at Cohen's kappa. Journal of Psychology & Psychotherapy, 5(4), 1.
    Weir, C., Hawkey, R., Green, A., & Devi, S. (2012). The cognitive processes underlying the academic reading construct as measured by IELTS. IELTS collected papers, 2, 212-269.
    Weir, C.J., & Porter, D. (1996). The multi-divisible or unitary nature of reading: The language tester between Scylla and Charybdis. Reading in a Foreign Language, 10, 1-19.
    Wolf, D. F. (1993). Issues in reading comprehension assessment: Implications for the development of research instruments and classroom tests. Foreign Language Annals, 26(3), 322-331.
    Description: Master's thesis
    National Chengchi University
    Department of English
    109551015
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0109551015
    Data Type: thesis
    Appears in Collections: [Department of English] Theses

    Files in This Item:
    101501.pdf (1685 KB, Adobe PDF)


    All items in the NCCU Institutional Repository (政大典藏) are protected by copyright, with all rights reserved.

