2,100 words; 12,000 English characters; 3,600 Chinese characters

Source: Duke-Williams E, King T. Using computer-aided assessment to test higher level learning outcomes. © Loughborough University, 2001.

Original text

Using Computer-Aided Assessment to Test Higher Level Learning Outcomes

Terry King and Emma Duke-Williams
Department of Information Systems
University of Portsmouth
Buckingham Building
Portsmouth PO1 3HE
Email: terry.king@port.ac.uk
Tel: +44 (0) 23 9284 6426
Abstract

This paper sets out an approach using a revised Bloom's taxonomy of learning objectives for the careful design of objective questions to assist in the assessment of higher learning outcomes (HLO's), and details the creation and evaluation of a variety of such questions. This has been done within the context of two constraints: the use of popular, commercially available computer-aided assessment (CAA) software, specifically Question Mark Perception and Half-Baked Hot Potatoes, and the assumption of only limited learning technologist support. It examines the design issues inherent in objective questions for HLO's (specifically at the Apply, Analyse and Evaluate levels) and describes the construction of 22 objective questions devised for the formative assessment of two postgraduate Information Systems course units taken by two cohorts totalling 37 students. The questions were piloted in the CAA software and reviewed, a process which revealed key differences between the packages in marking and delivery. The paper examines the questions in terms of statistical indices of question quality (facility and discrimination), and draws on student interviews to identify which question types worked best and the key issues students face in preparing for such tests. It concludes with the main advantages and drawbacks of using CAA to assess HLO's, and the resource implications of doing so.
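The abstract refers to statistical indices of question quality, namely facility and discrimination. The paper does not reproduce its formulas at this point, so the following is a minimal sketch assuming the standard definitions (facility as the proportion of correct responses; discrimination as the difference in facility between the top and bottom thirds of students ranked by total score); the data values are illustrative only.

```python
# Minimal sketch of the standard facility and discrimination indices.
# The paper reports these statistics for its CAA questions but does not
# give formulas here; the definitions below are the commonly used ones.
from typing import List


def facility(item_scores: List[int]) -> float:
    """Proportion of students answering the item correctly (0..1)."""
    return sum(item_scores) / len(item_scores)


def discrimination(item_scores: List[int], total_scores: List[float],
                   group_frac: float = 1 / 3) -> float:
    """Upper-group minus lower-group facility, groups formed on total test score."""
    ranked = sorted(zip(total_scores, item_scores), key=lambda pair: pair[0])
    n = max(1, int(len(ranked) * group_frac))
    lower = [item for _, item in ranked[:n]]   # weakest students overall
    upper = [item for _, item in ranked[-n:]]  # strongest students overall
    return facility(upper) - facility(lower)


if __name__ == "__main__":
    # Hypothetical results for one question across a small cohort.
    item = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1]
    totals = [18, 7, 15, 20, 9, 14, 17, 6, 19, 16, 8, 13]
    print(f"facility = {facility(item):.2f}, "
          f"discrimination = {discrimination(item, totals):.2f}")
```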
Keywords

Computer-aided Assessment, Bloom's Taxonomy, Learning Objectives, Objective Testing, Computer-based Testing

Background

Students in higher education engage in a variety of learning activities with the aim of attaining certain defined learning outcomes. As students progress to Level 3 and post-graduate work, it is accepted that they engage increasingly in activities designed to develop skills and abilities which are considered to be of a higher cognitive complexity (Zakrzewski and Steven, 2001). Holzl (2000) places emphasis on the development of certain graduate attributes in the cognitive domain such as critical thinking and informed judgement, in-depth knowledge of a field, the ability to organise and analyse information, interdisciplinary perspectives, and problem solving which calls for evaluation and creation. The development of such attributes, however, requires appropriate learning activities and appropriate assessment. For formative assessment this means assessment which encourages the development of these abilities, together with self- and peer-assessment, and which gives students practice in the relevant knowledge domain; by engaging in these activities and receiving suitable feedback, students both learn and gain a measure of their progress. Traditional feedback, delivered face-to-face by tutors or within student groups, is becoming scarcer as student numbers grow and resources per student diminish. For summative assessment there is a further requirement to test these more sophisticated abilities. Traditional paper-based written examinations involve marking that is open to subjectivity and bias, so attention turns to objective testing and to whether this mode of assessment can be applied to the more complex cognitive abilities.
This paper sets out an approach using a revised Bloom's taxonomy of learning objectives for the careful design of objective questions to assist in the assessment of higher learning outcomes, and details the creation and evaluation of a variety of such questions. This has been done within the context of two constraints: the use of popular, commercially available CAA software and the limited availability of learning technician/technologist support. It assumes an environment in which the authors and many other lecturers find themselves, one in which they are willing to put considerable effort into question creation and are competent users of software applications, but have no programming skills.

Classification of Learning Outcomes

Bloom's taxonomy of learning objectives has been chosen as the framework for approaching the problem of assessing for higher learning outcomes (Bloom et al, 1956). The reason for this follows Delgano (1998), who considered other schemes for classifying learning outcomes but came down in favour of Bloom's taxonomy because it had sufficiently detailed categories to allow outcomes to be mapped clearly onto learning activities, and was in widespread use so designers did not have to learn an additional scheme. At the University of Portsmouth, assessment strategy is closely tied to Bloom's taxonomy, and lecturers are expected to be familiar with it when formulating learning objectives.
Bloom's taxonomy of learning objectives (Bloom et al, 1956) attempted to classify forms of learning into three categories: the cognitive, affective and psychomotor domains. Within the cognitive domain, Bloom identified six levels of learning which represented increasing levels of cognitive complexity, from the lowest level of Knowledge (or remembering) through Comprehension, Application, Analysis, Synthesis and Evaluation. Each level encompassed those below it, so, for example, analysis could only occur if the relevant facts were known and understood. The lowest three levels are regarded as 'foundation thinking' for higher education (Ryan and Frangenheim, 2000). Certain learning outcomes are associated with each level, expressed as 'verbs' such as recall, draw, calculate, classify, design or evaluate, with higher learning outcomes reflecting greater cognitive complexity. It is often assumed that tests which require one correct answer are suitable only for the lowest levels of learning. This was never entirely true, and with the progress of CAA, objective tests aimed at the three highest levels, Analysis, Synthesis and Evaluation, can now be considered.

Objective Question Design for Higher Learning Outcomes
Advice is available for the design of effective objective tests in the form of general guidelines for question design and of grids and matrices for plotting content against learning levels and outcomes (Heard et al, 1997; Rolls and Watts, 1998). However, much less direction is given on how to design the assessment questions themselves. Because of this, the design of objective questions to test higher learning outcomes (HLO's) often follows one of three approaches:

Derivation from the verbs associated with HLO's. This can be misleading. For example, any question with the verb 'judge' is assumed to be an Evaluation level question, but in fact students are asked to judge on a variety of criteria, and, if the criterion is understanding of a theory, then the question will be at a Comprehension level. Often such criteria are not made explicit to students, so that they are left coping with ambiguity.

Extrapolation from existing subjective examination questions. This can result in objective questions which are unclear as to which learning level they apply. This is hardly surprising, as examination questions are generally not subjected to the rigorous scrutiny applied to objective questions, and exactly what they assess can be open to debate.

Use of exemplars. For example, those described by Mackenzie (1999), Carneson et al. (no date), and Heard et al. (1997) are effective, but are limited because, while they provide templates from which similar questions can be generated, they do not give lecturers a formal design method for creating new question formats.

None of these are particularly successful. What is needed is a systematic framework for positioning questions for particular learning outcomes.
A framework which offers many possibilities is the revision of Bloom's taxonomy by Anderson and Krathwohl (2001) and co-workers, which results in the basic table shown in Figure 1. The six levels remain, but each has been replaced by its matching verb – Remember, Understand, Apply, Analyse, Evaluate and Create – in order to facilitate the writing of learning objectives. Create is now the last and highest level of learning, as they consider that evaluation is a necessary step which precedes any generative process.

The three higher learning levels which were initially of main interest for constructing CAA questions were subdivided in detail:

Analyse - encompassing differentiating or distinguishing, organising or structuring, and deconstructing (which concerns determining the values underlying presented material).

Evaluate - which breaks down into the two processes of checking for internal consistency, and critiquing, which involves judging against external criteria.

Create - which involves generative processes such as hypothesizing, planning, designing, and producing or constructing.
Figure 1: Modified basic table of Bloom's learning objectives (Anderson and Krathwohl, 2001) showing the distribution of the objective questions with HLO's used in the paper.

The table in Figure 1 has been extended by the addition of a knowledge dimension to help 'educators distinguish more closely what they teach' and, by implication, what they are assessing. This dimension is detailed in the columns on the left of Figure 1, with the different forms of knowledge covering:

Factual Knowledge. The basic details of the content of a course which students must know to make sense of the discipline, divided into knowledge of terminology and of specific details and elements. Itemised knowledge, before interrelationships are considered, belongs here.

Conceptual Knowledge. Encompassing the knowledge of classifications and categories, principles and generalisations, and theories, models and structures.

Procedural Knowledge. This covers the knowledge of subject-specific skills, algorithms, techniques and methods, together with the knowledge of the criteria used in determining when to use specific procedures. It is the knowledge of 'how to do something'.

Metacognitive Knowledge. That class of knowledge by which students know how they come to know and learn. It includes the conscious application of cognitive strategies by students and their own self-knowledge of learning strengths and styles.

In using this framework for the construction of objective questions to test HLO's, certain areas of the table in Figure 1 have been excluded, viz. metacognitive knowledge, which is not subject-specific, and the two lowest levels of learning, which are not applicable to this investigation.
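Since the revised taxonomy is used here as a two-dimensional grid, knowledge dimension against cognitive process, onto which questions are positioned, a short sketch may help to show how a distribution of the kind summarised in Figure 1 can be recorded. The question identifiers and tags below are hypothetical; only the two dimensions themselves come from Anderson and Krathwohl's revision as described above.

```python
# Minimal sketch of positioning questions on the revised taxonomy grid
# (knowledge dimension x cognitive process) and tallying the distribution,
# as summarised in Figure 1. The question tags are illustrative only.
from collections import Counter

PROCESSES = ["Remember", "Understand", "Apply", "Analyse", "Evaluate", "Create"]
KNOWLEDGE = ["Factual", "Conceptual", "Procedural", "Metacognitive"]

# (question id, knowledge type, cognitive process) -- hypothetical tags.
questions = [
    ("Q1", "Conceptual", "Analyse"),
    ("Q2", "Procedural", "Apply"),
    ("Q3", "Conceptual", "Evaluate"),
    ("Q4", "Factual", "Analyse"),
]

# Count how many questions fall into each cell of the grid.
grid = Counter((k, p) for _, k, p in questions)

# Print the distribution: rows are knowledge types, columns are process levels.
header = "".join(f"{p:>12}" for p in PROCESSES)
print(f"{'':>14}{header}")
for k in KNOWLEDGE:
    row = "".join(f"{grid.get((k, p), 0):>12}" for p in PROCESSES)
    print(f"{k:>14}{row}")
```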
It was also the original intention to include the highest level of learning, 'Create', within the study. Anderson and Krathwohl (2001) are quite specific about the activities included within this level of learning, viz. generating, planning and producing. Each of the many possible results of such activities, i.e. alternative hypotheses, research plans, designed procedures, an invention or a construction, is by its very nature both unique and equally valid. There must be many 'correct' answers, which takes such questions outside the remit of objective testing, which requires one (and only one) fully valid response so that they can be marked automatically within the CAA software. Some exploratory attempts were nevertheless made with questions in which students drag selectable symbols onto a template (Figure 2) or improve a poorly constructed layout against given criteria (Figure 3).

Figure 2: The student is required to drag symbols into specified locations to 'construct' a working diagram for a given problem.

Figure 3: The student is asked to correct an incorrect layout according to specific criteria using drag-and-drop.
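Because each of these drag-and-drop questions has one (and only one) fully valid layout, it can be marked automatically. The sketch below illustrates that marking principle only; the slot and symbol names are hypothetical, and the internal marking mechanisms of Question Mark Perception and Hot Potatoes are not described in the paper.

```python
# Minimal sketch of automatic marking for a drag-and-drop question of the
# kind shown in Figures 2 and 3: the student's placements are compared
# with the single valid layout defined by the question author.
from typing import Dict

# The one fully valid layout: slot -> symbol (hypothetical names).
TARGET_LAYOUT: Dict[str, str] = {
    "input": "sensor",
    "process": "controller",
    "output": "actuator",
}


def mark_drag_and_drop(student_layout: Dict[str, str]) -> float:
    """Return the fraction of slots filled with the correct symbol."""
    correct = sum(
        1 for slot, symbol in TARGET_LAYOUT.items()
        if student_layout.get(slot) == symbol
    )
    return correct / len(TARGET_LAYOUT)


if __name__ == "__main__":
    attempt = {"input": "sensor", "process": "actuator", "output": "controller"}
    print(f"score: {mark_drag_and_drop(attempt):.2f}")  # 0.33 for this attempt
```

An all-or-nothing variant would simply test whether the student's layout equals the target layout, rather than awarding partial credit per slot.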
However, further consideration of these questions against the framework led to the conclusion that they were not at the highest learning level of Create, but were essentially an analysis of material with a view to determining how elements are organised or (re)structured. In view of this it was decided to omit further consideration of the Create learning outcome and to aim questions largely at the levels of Analyse and Evaluate, with some further exploration of Apply level questions. The latter have the sub-categories of executing, for familiar tasks, and implementing, for the application of procedures to unfamiliar tasks.

22 objective questions for assessing HLO's were constructed for implementation in two commercially available CAA software packages, Question Mark Perception and Half-Baked Hot Potatoes. The questions were devised for formative assessment for two course units on the MSc in Information Systems, one covering basic multimedia theory and the other the educational theory underpinning the development of computer-aided learning. The questions fell into three categories:

A. 13 questions devised solely using the revised Bloom's framework.

B. 6 questions devised by using exemplars.
C. 3 questions adapted directly from past exam papers in multimedia theory.

Translation:

Using Computer-Aided Assessment to Test Higher Level Learning Outcomes

Terry King and Emma Duke-Williams
Department of Information Systems, University of Portsmouth
Email: terry.king@port.ac.uk
Tel: +44 (0) 23 9284 6426
Abstract

This paper sets out an approach using a revised Bloom's taxonomy of learning objectives for the careful design of objective questions to assist in the assessment of higher learning outcomes, and details the creation and evaluation of a variety of such questions. This has been done within the context of two constraints: the use of popular, commercially available computer-aided assessment (CAA) software, specifically Question Mark Perception and Half-Baked Hot Potatoes, and the assumption of only limited learning technologist support. It examines the design issues inherent in objective questions for HLO's (specifically at the Apply, Analyse and Evaluate levels), first presenting the design approach and then the complete framework. It describes the construction of 22 objective questions devised for the formative assessment of two postgraduate Information Systems course units, involving two cohorts totalling 37 students. The questions were first piloted in the CAA software and then reviewed, a process which revealed key differences between the packages in marking and delivery. The paper examines the questions in terms of statistical indices of question quality (facility and discrimination indices), and draws on student interviews to establish which question types worked best and the key issues students face in preparing for such tests. It concludes with the main advantages and drawbacks of using CAA to assess HLO's, together with the resources this process requires.

Keywords

Computer-aided Assessment, Bloom's Taxonomy, Learning Objectives, Objective Testing, Computer-based Testing

Background

Students in higher education engage in a variety of learning activities with the aim of attaining certain defined learning outcomes. As students progress to Level 3 or post-graduate work, they engage increasingly in activities designed to develop skills and abilities of higher cognitive complexity (Zakrzewski and Steven, 2001). Holzl (2000) emphasises the development of graduate attributes in the cognitive domain, such as critical thinking and informed judgement, in-depth knowledge of a field, the ability to organise and analyse information, interdisciplinary perspectives, and problem solving which demands evaluation and creation. The development of these attributes, however, requires appropriate learning activities and appropriate assessment, so certain requirements emerge for assessment in higher education. Formative assessment should encourage the development of these abilities, foster self- and peer-assessment, and give students practice in the relevant knowledge domain; by engaging in these activities and receiving suitable feedback, students not only learn but also gain a measure of their progress. Traditional feedback, given face-to-face between tutor and student or among members of a student group, is becoming increasingly rare as student numbers grow and educational resources become scarcer. In summative assessment, the more sophisticated abilities must also be tested. Traditional paper-based written examinations involve marking affected by subjectivity and bias, and so attention turns to objective questions, to see whether this mode of assessment can be applied to the more complex cognitive abilities.

This paper sets out an approach using a revised Bloom's taxonomy of learning objectives for the careful design of objective questions to assist in the assessment of higher learning outcomes, and details the creation and evaluation of a variety of such questions. This has been done within the constraints of popular, commercially available CAA software (Question Mark Perception) and the assumed limited availability of learning technologist support. It assumes an environment in which the authors and many other lecturers find themselves: they are willing to devote real effort to question creation and are competent users of software applications, but have no programming skills.
Classification of Learning Outcomes

Bloom's taxonomy of learning objectives has been chosen as the framework for assessing higher learning outcomes (Bloom et al, 1956). The reason follows Delgano (1998), who considered other schemes for classifying learning outcomes but came down entirely in favour of Bloom's taxonomy, because its categories are detailed enough for outcomes to be mapped clearly onto the corresponding learning activities, and its widespread use means designers do not need to learn an additional scheme. At the University of Portsmouth, assessment strategy is closely tied to Bloom's taxonomy, and lecturers are expected to be thoroughly familiar with it when formulating learning objectives.

Bloom's taxonomy of learning objectives (Bloom et al, 1956) attempted to classify forms of learning into three domains: cognitive, affective and psychomotor. Within the cognitive domain, Bloom identified six levels of learning of increasing cognitive complexity, from the lowest level of Knowledge (remembering) through Comprehension, Application, Analysis, Synthesis and Evaluation. Each level encompasses all the levels below it; for example, analysis can only take place when the relevant facts are known and understood. The lowest three levels are regarded as the 'foundation thinking' of higher education (Ryan and Frangenheim, 2000). The learning outcomes associated with each level are expressed as 'verbs' such as recall, draw, calculate, classify, design or evaluate, with higher learning outcomes reflected in greater cognitive complexity. It is often taken for granted that tests which require a single correct answer apply only to the lowest levels of learning. This was never entirely true, and with the progress and development of CAA, testing at the three highest levels, Analysis, Synthesis and Evaluation, now has a degree of applicability.

Objective Question Design for Higher Learning Outcomes

Advice for the design of effective objective tests is available in the form of general guidelines for question design and of grids and matrices for plotting content against learning levels (Heard et al, 1997; Rolls and Watts, 1998), but very little of it addresses the design of the assessment questions themselves. Because of this, the design of objective questions to test higher learning outcomes (HLO's) usually follows one of three approaches:

Derivation from the verbs associated with HLO's. Any question involving the verb 'judge' is assumed to be an Evaluation level question, which can be misleading: in practice students are asked to judge against a variety of criteria, and if the criterion is understanding of a theory, the question sits at the Comprehension level. Such criteria are often not made explicit, so students are left to cope with the ambiguity.

Extrapolation from existing subjective examination questions. This can produce objective questions whose learning level is unclear. Examination questions are generally not subjected to the strict scrutiny applied to objective questions, and what they assess can be open to debate.

Use of exemplars. Those described by Mackenzie (1999), Carneson et al. (no date) and Heard et al. (1997) are effective but limited, because while they provide templates for generating questions of the same types, they do not give lecturers a formal design method for creating new question formats.
None of these is particularly successful. What is needed is a systematic framework for positioning questions for particular learning outcomes.

A framework which offers many possibilities is the revision of Bloom's taxonomy by Anderson and Krathwohl (2001) and co-workers, giving the basic table shown in Figure 1. The six levels remain, but each has been replaced by its matching verb - Remember, Understand, Apply, Analyse, Evaluate and Create - to make the writing of learning objectives easier. Create is now the final and highest level of learning, since they consider evaluation to be a necessary step which precedes any generative process.

The three higher learning levels, which were initially the main focus for constructing CAA questions, were subdivided in detail:

Analyse - including differentiating or distinguishing, organising or structuring, and deconstructing (which concerns determining the values underlying the presented material).

Evaluate - which breaks down into two processes: checking for internal consistency, and critiquing, i.e. judging against external criteria.

Create - which involves generative processes such as hypothesising, planning, designing, and producing or constructing.

Figure 1: Modified basic table of Bloom's learning objectives (Anderson and Krathwohl, 2001), showing the distribution of the objective questions with HLO's used in this paper.

The table in Figure 1 has been extended with a knowledge dimension so that educators can distinguish more closely what they teach, and by implication what they are assessing. This dimension is detailed on the left of Figure 1, with the different forms of knowledge covering:

Factual Knowledge. The basic details of the content of a course, the terminology and the specific details and elements which students must know, before their interrelationships are considered.

Conceptual Knowledge. Covering the knowledge of classifications and categories, principles and generalisations, and theories, models and structures.

Procedural Knowledge. The knowledge of subject-specific skills, algorithms, techniques and methods, and of the criteria used in deciding when to use particular procedures; it is the knowledge of how to do something.

Metacognitive Knowledge. The class of knowledge by which students know how they come to know and learn, including the conscious application of cognitive strategies and their own self-knowledge of learning strengths and styles.

When this framework is used to construct objective questions to test HLO's, certain areas of the table in Figure 1 are excluded, namely metacognitive knowledge, which is not subject-specific, and the two lowest levels of learning, which are not applicable to this investigation; originally, however, the highest level of learning, 'Create', was to be included.
Anderson and Krathwohl (2001) are quite specific about this level of learning, which covers the activities of generating, planning and producing. Alternative hypotheses, research plans, designed procedures, an invention or a construction: each of the outcomes of these activities is by its nature unique and equally valid. There would have to be many 'correct' answers, which places such questions outside the remit of objective testing, which requires one (and only one) fully valid response so that automatic marking within the CAA software is possible. A series of exploratory attempts was made with questions in which students drag selectable markers onto a template (Figure 2) or improve a poorly constructed layout within the question (Figure 3).

Figure 2: The student drags symbols into specified locations to construct a working diagram for a given problem.

Figure 3: The student is asked to correct an incorrect layout, according to specific criteria, using drag-and-drop.

Further consideration of these questions against the framework, however, led to the conclusion that they were not at the highest learning level of Create, but were essentially an analysis of how elements are organised or restructured. In view of this it was decided to drop further consideration of the Create learning outcome and to aim questions mainly at the Analyse and Evaluate levels, with some further exploration of Apply level questions; the latter cover the sub-categories of executing familiar tasks and of implementing, i.e. applying procedures to unfamiliar tasks.

22 objective questions for assessing HLO's were constructed and implemented in the two commercial software packages Question Mark Perception and Half-Baked Hot Potatoes. They were designed for the formative assessment of two Information Systems course units, one on basic multimedia theory and the other on the educational theory underpinning the development of computer-aided learning. The questions fell into three categories: 13 devised solely using the revised Bloom's framework, 6 devised using exemplars, and 3 adapted directly from past examination papers in multimedia theory.