National Chengchi University Institutional Repository (NCCUR): Item 140.119/35257
    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/35257


    Title: 以語意分析及Bloom理論為基礎之線上測驗輔助及智慧型評分系統
    A Study on Computer Aided Testing and Intelligent Scoring: Based on Semantic Analysis and Bloom's Taxonomy
    Authors: 應鳴雄
    Ying, Ming-Hsiung
    Contributors: 楊亨利
    Yang, Heng-Li
    應鳴雄
    Ying, Ming-Hsiung
    Keywords: Bloom教育目標分類
    線上測驗系統
    本體論
    電子化學習
    填充題
    Bloom’s Taxonomy
    On-Line Testing
    Ontology
    E-Learning
    Fill-in-Blank Items
    Date: 2005
    Issue Date: 2009-09-18 14:34:12 (UTC+8)
    Abstract: With the spread of e-learning technologies, online learning and online testing have become important topics in information education. However, item types such as fill-in-blank and short-answer questions raise many problems when administered online; when an online testing system offers these item types, serious score-equating problems arise. Most current online testing systems therefore still rely mainly on true-false, single-choice, and multiple-choice items. Although a few systems provide fill-in-blank and other open-ended item types, they do not automatically score examinees' answers at the semantic level.
    Moreover, existing online testing systems neither let teachers define personalized scoring styles nor help resolve the conflicts in scoring rules that arise when several teachers share one testing platform. To address these problems and give online testing the same evaluation effectiveness as traditional testing, this study applies fuzzy theory, a thesaurus of semantically similar terms, and artificial-intelligence concepts to develop an online testing and intelligent scoring subsystem. Besides the true-false, single-choice, and multiple-choice items found in typical testing systems, the system also supports fill-in-blank items graded by an intelligent scoring mechanism. After building the prototype, the study empirically compared the scoring effectiveness of traditional paper-and-pencil scoring, a conventional scoring mechanism, and the proposed intelligent scoring mechanism.
    The empirical results show that, for tests containing fill-in-blank items, different scoring mechanisms yield significantly different scores. In its initial stage the intelligent scoring mechanism narrowed the gap with paper-and-pencil scoring and improved on the conventional mechanism, but it still could not be shown statistically to have the same scoring effectiveness. After the semantic vocabulary was expanded, however, the scores produced by the expanded intelligent scoring mechanism showed no significant difference from paper-and-pencil scoring. This indicates that if an online testing system with fill-in-blank items incorporates expandable lexical-semantic knowledge and a multi-function intelligent fuzzy scoring mechanism that lets teachers enter scoring-rule parameters reflecting their personal scoring styles, the system can potentially grade fill-in-blank tests as effectively as paper-and-pencil scoring.
    An online test, furthermore, should do more than return a score; it should help learners understand their results along the knowledge and cognitive-process dimensions. If test items carry Bloom's taxonomy information, testing activities can benefit learners more. To reduce teachers' item-writing burden, this study also proposes a system architecture, based on ontology, lexical networks, Bloom's taxonomy, a Chinese semantic database, and artificial intelligence, for assisting teachers in generating item banks, so that computer-generated items cover the knowledge and cognitive-process dimensions of the revised Bloom taxonomy of the cognitive domain. The computer-assisted item generation not only reduces teachers' manual workload; the generated items can assess three kinds of knowledge (factual, conceptual, and procedural) and five cognitive-process levels (remember, understand, apply, analyze, and evaluate). Because the system can automatically score only four item types, the study cannot yet generate items at the "create" level, but it can generate items covering basic knowledge concepts and provide a Bloom-informed item bank for assessing learning outcomes. Finally, for e-learning environments, the study proposes a conceptual model for evaluating item quality and score-equating capability in online testing systems, clarifying the scope and limits of Bloom's taxonomy in this setting and, through the notion of test equating, offering a new perspective on how educational measurement theory assesses item quality.
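    To make the fuzzy scoring idea above concrete, the following is a minimal illustrative sketch rather than the thesis's actual implementation: it grades one fill-in-blank answer by looking up its semantic similarity to the model answer in a small thesaurus and applying a teacher-configurable strictness threshold for partial credit. The thesaurus entries, similarity values, and the ScoringStyle parameters are hypothetical stand-ins for the system's semantic lexicon and scoring-style rules.

        # Minimal illustrative sketch (assumed names, not the thesis's implementation):
        # fuzzy partial-credit scoring for a single fill-in-blank answer.

        from dataclasses import dataclass

        # Hypothetical thesaurus: model answer -> semantically related terms with a
        # similarity degree in [0, 1] (1.0 = treated as fully equivalent).
        THESAURUS = {
            "本體論": {"ontology": 1.0, "知識本體": 0.9},
        }

        @dataclass
        class ScoringStyle:
            """Assumed teacher-specific scoring preferences."""
            strictness: float = 0.7      # minimum similarity that earns any credit
            partial_credit: bool = True  # whether near-synonyms get proportional credit

        def semantic_similarity(answer: str, key: str) -> float:
            """Degree to which the examinee's answer matches the model answer."""
            if answer.strip() == key:
                return 1.0
            return THESAURUS.get(key, {}).get(answer.strip(), 0.0)

        def score_blank(answer: str, key: str, points: float, style: ScoringStyle) -> float:
            """Return full, partial, or zero credit for one blank."""
            sim = semantic_similarity(answer, key)
            if sim >= 1.0:
                return points
            if style.partial_credit and sim >= style.strictness:
                return round(points * sim, 2)
            return 0.0

        if __name__ == "__main__":
            style = ScoringStyle(strictness=0.8)
            print(score_blank("本體論", "本體論", 5, style))    # exact match -> 5
            print(score_blank("知識本體", "本體論", 5, style))  # near-synonym (0.9) -> 4.5
            print(score_blank("資料庫", "本體論", 5, style))    # unrelated -> 0.0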
    With the rapid development of e-learning, online learning and online testing have become important topics in information education. Teachers currently still need to spend much time creating and maintaining online testing item banks. Some studies have applied the revised Bloom's taxonomy to design meaningful learning assessments. This research applies ontology, Bloom's taxonomy, a Chinese semantic database, artificial intelligence, and the semantic web to design an online course learning system that assists teachers in creating test items.
    Most present online testing systems offer only multiple-choice and true-false items. Although some provide fill-in-blank items, they can only judge answers as entirely right or entirely wrong through simple binary pattern matching. In order to achieve the same evaluation effect as traditional paper-and-pencil testing, this research adopts the concepts of fuzzy theory, thesauri, sets, and artificial intelligence to develop a fuzzy scoring mechanism. The proposed online testing system includes true-false, multiple-choice, and fill-in-blank items; the latter are graded through fuzzy judgment of the kind that human teachers apply naturally.
    In addition, past research indicates that e-learning students learn more when given appropriate feedback messages. This research therefore adds feedback messages to the proposed system according to different situations. The proposed online testing system not only grades the test items but also explains the answers and provides related materials to the test takers.
    The results of the study are: (1) we could design test items that require a particular cognitive process applied to a particular type of knowledge, although we still could not generate items that test the "create" level of the cognitive-process dimension; (2) the test items could be used to assess the learning level meaningfully; (3) the computer can assist teachers in creating a large number of items and save item-authoring time; (4) different scoring mechanisms have a significant effect on test scores; (5) initially, although our fuzzy online testing system was significantly better than the usual online testing system, it could not achieve the same effect as paper-and-pencil testing; (6) after expanding the semantic vocabulary from feedback, our fuzzy scoring mechanism is equivalent to paper-and-pencil scoring.
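    As a complementary sketch of the item-bank side, the snippet below shows one plausible way to represent test items tagged with the revised Bloom taxonomy's knowledge and cognitive-process dimensions and to generate a simple Remember/Factual fill-in-blank item from an ontology-like concept map. The CONCEPT_MAP, the item template, and the enum values are assumptions introduced only for illustration, not the thesis's actual generation rules.

        # Minimal illustrative sketch (assumed names, not the thesis's generation rules):
        # test items tagged with the revised Bloom taxonomy, plus a simple
        # Remember/Factual fill-in-blank item generated from an ontology-like map.

        from dataclasses import dataclass
        from enum import Enum

        class Knowledge(Enum):
            FACTUAL = "factual"
            CONCEPTUAL = "conceptual"
            PROCEDURAL = "procedural"

        class Cognitive(Enum):
            REMEMBER = "remember"
            UNDERSTAND = "understand"
            APPLY = "apply"
            ANALYZE = "analyze"
            EVALUATE = "evaluate"
            # "Create" is omitted: the abstract notes such items cannot yet be generated.

        @dataclass
        class TestItem:
            stem: str
            answer: str
            knowledge: Knowledge
            cognitive: Cognitive

        # Toy ontology fragment (hypothetical): concept -> defining phrase.
        CONCEPT_MAP = {
            "Ontology": "a formal, explicit specification of a shared conceptualization",
        }

        def make_fill_in_blank(concept: str) -> TestItem:
            """Generate a Remember/Factual fill-in-blank item from the concept map."""
            definition = CONCEPT_MAP[concept]
            return TestItem(
                stem=f"____ is {definition}.",
                answer=concept,
                knowledge=Knowledge.FACTUAL,
                cognitive=Cognitive.REMEMBER,
            )

        if __name__ == "__main__":
            item = make_fill_in_blank("Ontology")
            print(item.stem, "->", item.answer,
                  f"[{item.knowledge.value} / {item.cognitive.value}]")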
    Description: 博士 (doctoral dissertation)
    National Chengchi University
    資訊管理研究所 (Graduate Institute of MIS)
    89356501
    94
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0893565011
    Data Type: thesis
    Appears in Collections:[Department of MIS] Theses

    Files in This Item:

    File           Size      Format
    56501101.pdf   77 KB     Adobe PDF
    56501102.pdf   175 KB    Adobe PDF
    56501103.pdf   157 KB    Adobe PDF
    56501104.pdf   219 KB    Adobe PDF
    56501105.pdf   525 KB    Adobe PDF
    56501106.pdf   1594 KB   Adobe PDF
    56501107.pdf   1634 KB   Adobe PDF
    56501108.pdf   558 KB    Adobe PDF
    56501109.pdf   319 KB    Adobe PDF
    56501110.pdf   1023 KB   Adobe PDF
    56501111.pdf   276 KB    Adobe PDF
    56501112.pdf   118 KB    Adobe PDF

