
1 Probably more than you ever wanted to know
Assessment is a huge topic, covering many kinds of assessment measures, the times at which different types of assessment are performed, and the purposes they serve. This presentation looks at different types of evaluation: diagnostic, formative, confirmative, summative, direct, and indirect. Assessments serve many purposes. They can:
Guide instructional decision making
Diagnose learning and performance problems
Determine what students have learned
Promote self-regulation
Promote learning (recent studies have shown that students learn more when they are tested)
Determine whether students understand and apply what they have learned
Determine whether students are employable

2 Assessment Why do we assess? When do we assess? Who do we assess?
What do we assess? All these questions will be answered.

3 Types of Assessment
Diagnostic: used to determine the starting point of instruction
Formative: used to improve instruction; used to provide remediation
Summative: used to determine the effectiveness of a unit or program; used to assign grades
Confirmative: used to determine lasting effects

Diagnostic assessment typically happens before the student begins a class or program of study and is used to determine strengths and weaknesses.
Formative assessment is probably what we think of when we give weekly quizzes, but we don't often use those quizzes, or even midterms, for the purpose of improving instruction. One thing each instructor can do is analyze the overall pattern of scores and grades. Are most people passing? What percentage is failing? Is there one question that nearly everyone misses? That could indicate that more instruction is needed on a topic, or that the test is not adequate. Students also receive feedback about how they are doing. (A minimal item-analysis sketch follows this slide.)
Summative assessment is typically performed at or near the end of a program, term, course of study, or unit. These assessments can help you determine whether your instruction was sufficient and effective, and can help you evaluate your materials, lesson plans, etc. to improve learning.
Confirmative assessment is done 6-12 months after a program is over. Did participants retain the information? Are they using it, or have they fallen back on old behaviors? For example, after the hand-washing lesson that teaches students not to touch the doors once they have washed their hands, how long do they avoid touching the doors? Six months later, are they still using paper towels to open the door? Confirmative assessment tests the endurance of the outcomes and the return on investment, and establishes the effectiveness, efficiency, impact, and value of the training over time (Wiley).
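The score-pattern analysis described under formative assessment above (overall pass rate, per-question miss rate) is easy to automate. Here is a minimal Python sketch, assuming quiz results are stored as per-student answer records; the sample data and the 70% passing threshold are made up purely for illustration.

```python
# Minimal item-analysis sketch for formative assessment data.
# Assumes each student's result is a dict of question -> 1 (correct) / 0 (incorrect).
# The sample data and the 70% passing threshold are illustrative, not prescriptive.

results = [
    {"Q1": 1, "Q2": 0, "Q3": 1, "Q4": 1},
    {"Q1": 1, "Q2": 0, "Q3": 1, "Q4": 0},
    {"Q1": 0, "Q2": 0, "Q3": 1, "Q4": 1},
]

PASS_THRESHOLD = 0.70

def score(student):
    """Fraction of questions the student answered correctly."""
    return sum(student.values()) / len(student)

pass_rate = sum(score(s) >= PASS_THRESHOLD for s in results) / len(results)
print(f"Passing: {pass_rate:.0%}  Failing: {1 - pass_rate:.0%}")

# Per-question difficulty: a question nearly everyone misses may signal that
# the topic needs more instruction or that the item itself is flawed.
for q in results[0]:
    miss_rate = sum(1 - s[q] for s in results) / len(results)
    flag = "  <- review this item/topic" if miss_rate >= 0.8 else ""
    print(f"{q}: missed by {miss_rate:.0%}{flag}")
```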

4 Formal & Informal Assessment
Informal evaluation: spontaneous; what you may observe in a class
Formal evaluation: planned; assesses predetermined content; includes "pop" quizzes

Informal vs. formal: informal assessments are spontaneous, typically what the educator observes on a day-to-day basis, and make it possible to learn different things about different students. Informal observation during instruction and student questions can guide instruction: the pace can be adjusted (too fast or too slow), misunderstandings corrected, and so on. A formal assessment is planned and assesses predetermined content.

5 Qualities of Good Assessment
Must be valid and reliable
Measures what it is supposed to measure
Measures it consistently
Assessment measures should be congruent with:
Intended learning outcomes
Course objectives
Learning activities and content

Everything should match. For example, if you are assessing a blood draw, you need to use an instrument that accurately assesses a blood draw, such as a predefined checklist.

Reliability: the extent to which the assessment instrument yields consistent results for each student. How much are students' scores affected by temporary conditions unrelated to the characteristic being measured (test-retest reliability)? Do different people score students' performance similarly (scorer reliability, also known as interrater reliability)? Do different parts of a single assessment instrument lead to similar conclusions about a student's achievement (internal consistency reliability)? (A minimal sketch of two of these reliability checks follows this slide.)

Standardization: the extent to which assessment procedures are similar for all students. Are all students assessed on identical or similar content? Do all students have the same types of tasks to perform? Are instructions the same for everyone? Do all students have similar time constraints? Is everyone's performance evaluated using the same criteria?

Validity: the extent to which an assessment instrument measures what it is supposed to measure. Does the assessment tap into a representative sample of the content domain being assessed (content validity)? Do students' scores predict their success at a later task (predictive validity)? Does the instrument measure a particular psychological or educational characteristic (construct validity)?

Practicality: the extent to which an assessment is easy and inexpensive to use. How much class time does the assessment take? How quickly and easily can students' responses be scored? Is special training required to administer or score the assessment? Does the assessment require specialized materials that must be purchased?

Characteristics of good assessment:
Begins early in the planning process
Has a clearly articulated plan and timetable
Focuses squarely, though not exclusively, on explicit program objectives and intended outcomes
Includes both formative and summative components
Has the firm commitment of program leaders
Includes all internal stakeholders
Employs multiple measures to ensure the trustworthiness of all findings and conclusions
Collects data in a continuous and ongoing manner
Facilitates regular communication among stakeholders
Establishes formal and informal mechanisms to ensure data is used in the decision-making process
Is flexible enough to respond quickly to evolving needs and circumstances
Employs capable expertise internally, when possible, and externally, if needed
Sees assessment as a vehicle for improvement and not just an external obligation
Gathers, utilizes, and disseminates data in accordance with recognized ethical standards
Accepts that students and other participants are not experimental subjects
Generates convincing evidence of program results and organizational effectiveness
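Two of the reliability questions above can be made concrete with a small calculation: interrater (scorer) reliability as simple percent agreement between two raters, and internal consistency as Cronbach's alpha over per-item scores. The following Python sketch uses made-up ratings purely for illustration; it is not a validated instrument.

```python
# Minimal sketch of two reliability checks (illustrative data only).
from statistics import pvariance

# Interrater (scorer) reliability: do two raters score the same performances alike?
rater_a = [4, 3, 5, 2, 4, 3]
rater_b = [4, 3, 4, 2, 4, 3]
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"Percent agreement between raters: {agreement:.0%}")

# Internal consistency: Cronbach's alpha (rows = students, columns = items).
items = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 3, 2, 3],
]
k = len(items[0])                                    # number of items
item_vars = [pvariance(col) for col in zip(*items)]  # variance of each item
total_var = pvariance([sum(row) for row in items])   # variance of total scores
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```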

6 Revised Taxonomy Remember Understand Apply Analyze Evaluate Create
Factual knowledge
Conceptual knowledge
Procedural knowledge
Metacognitive knowledge

The revised taxonomy has 4 knowledge categories and 6 cognitive process categories. Metacognitive knowledge (knowledge of learning strategies, cognitive tasks, one's own strengths and weaknesses, etc.) is often overlooked; portfolios and reflection are good ways to assess it. Using the taxonomy table can broaden the list of objectives. Including more complex outcomes contributes to the transfer of learning and reinforces remembering of facts, concepts, procedures, and strategies. Assessments designed to measure intended learning outcomes (ILOs) beyond remembering must contain some novelty. Psychomotor taxonomy: observation, imitation, practice, adaptation.

7 Indirect Assessment Can give us information quickly
Provides indications, but no evidence
Based on perceptions
Examples:
Course evaluations
Number of student hours spent on homework, class time, etc.
Grades that are not based on explicit criteria related to clear learning goals

Indirect assessment is gathering information about student learning through means other than looking at actual samples of student work; it looks instead at indicators of learning. This approach is intended to find out about the quality of the learning process by getting feedback from the student or from other persons who may provide relevant information. It may use surveys of employers, exit interviews of graduates, focus groups, or any number of classroom assessment techniques (e.g., minute papers, muddiest-point papers, or one-sentence summaries). Both indirect and direct approaches provide useful information for improving student learning.

Indirect assessment can give us immediate feedback, which can be employed in a course to bring direct improvement to student learning. For example, a minute paper can reveal that students are confused about a particular idea and can lead to direct follow-up with additional instruction and review. Unfortunately, indirect assessment does not provide reliable evidence that learning objectives have been achieved. Surveys and focus groups may lead to improvements in a program, but they do not directly provide evidence of student learning. Students in exit surveys may believe that they have improved their abilities in critical thinking, but that is not proof that they have achieved that objective. The survey is an indirect assessment measure; it does not provide evidence that our students have achieved the standards of critical thinking skill that we expect of our graduates. If students responded that they did not believe they had learned critical thinking skills, we would certainly be concerned with that perception, but the belief that they had learned such skills is not evidence of goal achievement. Indirect assessments give indications of learning success, but no evidence. We may improve learning by following the information provided by indirect assessment, but it does not prove that learning has reached our expected standards. We can learn from indirect assessment, but we must also use direct assessment to provide real evidence that learning has been achieved.

Concepts adapted from: Direct vs. Indirect Assessment Methods, Assessment Handbook (n.d.), Department of Assessment, Skidmore College.

8 Indirect Assessment Indirect Measures at Program Level
Job placement
Course enrollment information
Surveys
Focus groups
Indirect measures at the institutional level:
Annual reports including institutional benchmarks
Studies that examine patterns of course selection or grading

Indirect assessment is based on an analysis of reported perceptions about student mastery of learning outcomes. The perceptions may be self-reports by students, or they may be made by others, such as alumni, fieldwork supervisors, employers, or faculty (Mary J. Allen, October 2007).

Standardized self-report surveys: select standardized instruments that are aligned to your specific program learning outcomes. Score, compile, and analyze the data. Develop local norms to track achievement across time and use national norms to see how your students compare to those on other campuses.
Focus groups: a series of carefully planned discussions among homogeneous groups of 6-10 respondents who are asked a carefully constructed series of open-ended questions about their beliefs, attitudes, and experiences. The session is typically recorded, and the recording is later transcribed for analysis. The data are studied for major issues and recurring themes along with representative comments.
Exit interviews: students leaving the university, generally graduating students, are interviewed or surveyed to obtain feedback. The data obtained can address strengths and weaknesses of an institution or program and/or assess relevant concepts, theories, or skills.
Interviews: conversations or direct questioning with an individual or group of people, conducted in person or on the telephone. The length of an interview can vary from 20 minutes to over an hour. Interviewers should be trained to follow agreed-upon procedures (protocols).
Surveys: commonly use open-ended and closed-ended questions. Closed-ended questions require respondents to answer from a provided list of responses; typically the list is a progressive scale, ranging from low to high or from strongly agree to strongly disagree.
Classroom assessment: often designed for individual faculty who wish to improve their teaching of a specific course. The data collected can be analyzed to assess student learning outcomes for a program.

9 Direct Assessment Looking at actual samples of student work
Capstone projects, papers, performances, etc.
Students must demonstrate, produce, or represent their learning
WASC prefers direct measures

"The difference between direct and indirect measures of student learning has taken on new importance as accrediting agencies such as WASC have required the use of direct measures to be the primary source of evidence. Indirect measures may serve only as supporting evidence." To produce evidence that we have achieved our General Education goals in the area of critical thinking (higher-order thinking), we have to employ direct assessment measures. We could employ standardized exit exams, or evaluate student critical thinking through the analysis of student portfolios, using shared rubrics to judge performance in the area of critical thinking.

Concepts adapted from: Direct vs. Indirect Assessment Methods, Assessment Handbook (n.d.), Department of Assessment, Skidmore College.

10 Direct Assessment Course level Course and homework assignments
Exams and quizzes; standardized tests
Term papers and reports
Observations of externship performance, clinical experiences
Research projects
Class discussion participation
Case study analysis
Rubric scores for writing, presentations, and performances
Grades based on explicit criteria related to clear learning goals

Direct measures (students demonstrate an expected learning outcome):
Scoring rubrics: can be used to holistically score any product or performance, such as essays, portfolios, recitals, oral exams, research reports, etc. A detailed scoring rubric that delineates the criteria used to discriminate among levels is developed and used for scoring. (A minimal rubric-scoring sketch follows this slide.)
Capstone courses: could be a senior seminar or a designated assessment course. Program learning outcomes can be integrated into assignments. Performance expectations should be made explicit prior to obtaining results.
Case studies: involve a systematic inquiry into a specific phenomenon, e.g., an individual, event, program, or process. Data are collected via multiple methods, often utilizing both qualitative and quantitative approaches.
Embedded questions in assignments: questions related to program learning outcomes are embedded within course exams. For example, all sections of "research methods" could include a question or set of questions relating to your program learning outcomes. Faculty score and grade the exams as usual and then copy the exam questions and scores that are linked to the program learning outcomes for analysis. The findings are reported in the aggregate.
Standardized achievement tests: select standardized tests that are aligned to your specific program learning outcomes. Score, compile, and analyze the data. Develop local norms to track achievement across time and use national norms to see how your students compare to those on other campuses.
Locally developed exams with objective questions: faculty create an objective exam that is aligned with program learning outcomes. Performance expectations should be made explicit prior to obtaining results.
Locally developed essay questions: faculty develop essay questions that align with program learning outcomes. Performance expectations should be made explicit prior to obtaining results.
Reflective essays: generally brief (five- to ten-minute) essays on topics related to identified learning outcomes, although they may be longer when assigned as homework. Students are asked to reflect on a selected issue, and content analysis is used to analyze the results. Performance expectations should be made explicit prior to obtaining results.
Collective portfolios: faculty assemble samples of student work from various classes and use the "collective" to assess specific program learning outcomes. Portfolios can be assessed by using scoring rubrics; expectations should be clarified before portfolios are examined.
Observations: can be of any social phenomenon, such as student presentations, students working in the library, or interactions at student help desks. Observations can be recorded as a narrative or in a highly structured format, such as a checklist, and they should be focused on specific program objectives.
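As an illustration of how a scoring rubric's criteria and weights can be made explicit and its ratings aggregated, here is a minimal Python sketch. The criteria, weights, and sample ratings are hypothetical placeholders, not a recommended rubric.

```python
# Minimal rubric-scoring sketch: explicit criteria, defined levels, weighted total.
# The criteria, weights, and sample ratings below are hypothetical.

rubric = {
    "thesis_and_argument": {"weight": 0.4, "levels": (1, 2, 3, 4)},
    "use_of_evidence":     {"weight": 0.4, "levels": (1, 2, 3, 4)},
    "mechanics_and_style": {"weight": 0.2, "levels": (1, 2, 3, 4)},
}

def weighted_score(ratings: dict) -> float:
    """Combine per-criterion ratings into a single weighted score (1.0-4.0 scale)."""
    total = 0.0
    for criterion, rating in ratings.items():
        spec = rubric[criterion]
        if rating not in spec["levels"]:
            raise ValueError(f"{criterion}: rating {rating} is not a defined level")
        total += spec["weight"] * rating
    return total

# One student's essay as rated by a reviewer using the rubric above.
essay_ratings = {"thesis_and_argument": 3, "use_of_evidence": 4, "mechanics_and_style": 2}
print(f"Weighted rubric score: {weighted_score(essay_ratings):.2f} / 4.00")
```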

11 Direct Assessment Program Level Capstone projects
Pass rates on licensure, certification or subject area tests Supervisor ratings of students' performance

12 Direct Assessment Institutional level
Performance on tests of writing, critical thinking, or general knowledge
Rubric scores for class assignments
Performance on achievement tests
Explicit self-reflections on what students have learned related to institutional programs such as service learning (e.g., asking students to name the three most important things they have learned in a program)

These and the preceding examples were provided by Skidmore College, which obtained the information from Middle States Commission on Higher Education, Student Learning Assessment: Options and Resources, Chapter 3, "Evaluating Student Learning," pp. 27-53, 2007.

13 Indirect Assessment: Surveys
Well known, widely used
Adaptable to different kinds of questions
Can use different formats: multiple choice, true/false, short answer, rating scales, etc.
Software is available to create surveys easily
Response rate may be low, which reduces reliability
Can over-survey

I took just a few points from the pros and cons; these are primarily from Barbara Wright's document online titled "Indirect Assessment." She works for WASC as the Associate Director of ACSCU. Student, alumni, employer, and faculty-staff surveys and questionnaires provide information about students' or others' perceptions of students' educational experiences and the institution's impact on their learning. Alumni questionnaires and surveys provide a retrospective view of graduates' educational experience and create an opportunity for them to recommend improvements in education based on what is relevant to their current employment, profession, or graduate education. Faculty-staff surveys or questionnaires provide perceptions of student learning, that is, what students are and are not able to demonstrate based on classroom-based assessments or observations of student behavior.

14 Indirect Assessment: interviews
One on one conversations to elicit information Can be highly structured More personal Allows for more probing Can reveal the “why” and “how” Labor intensive Require skilled interviewers Taken from Barbara Wright, Associate Director at WASC

15 Indirect: Focus Groups
Structured in-depth group discussions Adaptable to wide variety of groups and subjects Can reveal the “why” and “how” May reveal new insights Cheaper than individual interviews Problems: scheduling, sensitive issues, incentives Focus groups with representative students to probe a specific issue that may have been identified in a survey or identified in patterns of student performance as a result of formative or summative assessments Van Aken and collaborators describe the successful design of a focus group to obtain perceptions of historically under-represented students enrolled in an engineering program (1999).

16 Indirect: ethnographic research
Selected students gather information about learning or student experience through conversations with fellow students Insider perspective In-depth and longer term study Time consuming Need training and regular monitoring Risk of attrition of student participants Info is shared with faculty and analyzed. Students are chosen to participate in conversations or interactions with other students and to report on what they have learned. These students need to be carefully trained and monitored.

17 Direct or Indirect? Classroom research (usually direct)
Course management programs (usually direct)
Focus groups (usually indirect)
Student self-assessment (usually direct)
Portfolios straddle the line:
Direct: the work provides evidence of knowledge and skills
Indirect: reflective essays (perception)

Self-assessment is one I would have considered indirect, but Wright classifies it as direct "because the performance of self-assessment demonstrates how skilled students are at self-assessment." Focus groups can be direct if the topic is an issue in the major and students are guided to demonstrate their command of the disciplinary theories, concepts, and methods. Course management programs enable faculty to capture discussions and other evidence that would be ephemeral in the classroom. Classroom research can demonstrate what students know and can do, but it can also elicit reflection and perceptions.

18 Direct Assessment: Portfolios
Collections of student work (and sometimes other material) intended to illustrate achievement of learning outcomes: "collect, select, reflect, connect."
Adaptable to different levels, purposes, materials
Emphasize judgment
Can be labor intensive
Need carefully defined criteria, rubrics

Advantages: portfolios are adaptable to different levels of assessment (individual student, program, institution), purposes (cross-sectional snapshot; change or progress over time), and kinds of materials (written work, tapes of performances, student self-assessments). They can tell us where students are and how they got there. They emphasize human judgment and meaning-making, provide information likely to be used, and have become extremely popular, hence an easy sell. They engage students and faculty, are educational for both, reduce fears of misuse, can be managed by students to some extent, and are supported by many different software programs.

Disadvantages: portfolios can be labor-intensive and cumbersome to store and navigate through. Their contents must relate to articulated outcomes, and they require carefully defined criteria for review (e.g., rubrics) and training for reviewers. They also require distinguishing between the usefulness of the portfolio for students (e.g., to showcase work, impress prospective employers, inform advisors) and for assessment of learning.

Solutions/responses: collect samples of work, not everything from everybody; use electronic storage and retrieval; give students responsibility for maintaining the portfolio; invest in outcomes, because they're the basis for everything anyway; invest in good criteria for education's sake; invest in training for faculty development's sake.

19 Direct Assessment: Capstones
A wide variety of culminating projects, assignments, performances, or even experiences (externships) Cumulative and integrative Can demonstrate skills, major, general education, institutional outcomes, etc. May require an additional course Must be flexibility as well as commonality Labor intensive Advantages: Are cumulative Are integrative • Are adaptable to demonstration of skills general education professional field or major dispositions institutional outcomes combinations Are motivating for students Set standards for degree completion, graduation Provide an occasion for department-level discussion, interpretation Invite external evaluation • Help students make the transition to self-assessment professional assessment life-long learning Disadvantages: Pose challenge of capturing all students in their final year/semester Differences within/among majors demand flexibility plus commonality May mean an additional course requirement Require coordinating multiple dimensions of learning & assessment Can be labor-intensive Must relate to carefully articulated outcomes Require carefully defined criteria for review, e.g. rubrics Require distinguishing between purpose of the capstone for students and for program assessment Solutions/responses: Require the capstone for graduation Introduce as widely as possible across the institution Include capstone experiences within existing courses Provide resources, staff support View resources, labor, as worthwhile investment

20 Direct: Performances
Activities designed to demonstrate specific outcomes, e.g., a poster presentation, conduct of a class, facilitation of a group discussion, or "think aloud" analysis of a text
Emphasis on active learning and "practical" intelligence
Require careful definition of criteria, careful training of reviewers, and coordination/scheduling of reviewers

Advantages: performances have face validity in terms of preparation for the student's real-life goals. They put the emphasis on what the student can do (as opposed to knowing about): they require application and problem-solving, are integrative, may require spontaneous adaptation, and provide a reality check. They give students with practical intelligence and skills a chance to shine, can elicit affective outcomes (e.g., poise, grace under pressure), are motivating, encourage practice and rehearsing, put the emphasis on active learning, promote a coaching relationship between students and faculty (especially when there are external reviewers), promote self-assessment and the internalization of standards, and are highly adaptable, even to the liberal arts.

Disadvantages: performances can be labor-intensive, time-consuming, and expensive. They must relate to articulated outcomes and require careful definition of criteria (e.g., rubrics), careful training of reviewers (including external reviewers), and coordination and scheduling, especially of external reviewers. They may frighten off insecure students.

Solutions/responses: review a sample of students; embed performances in routine, non-threatening situations (e.g., internship, clinical setting); use digital means to make performances accessible to reviewers; regard outcomes, criteria, and training as an educational investment; remind students they must demonstrate employability.

21 Direct: Embedded Assessments
These include common assignments, template assignments, and secondary readings: student work produced in response to a course assignment is examined for multiple purposes, e.g., to determine command of course material but also to assess writing skill, information literacy, critical thinking, etc.
"Common assignments": the same assignment across multiple courses
"Template assignments": the same format, but not an identical assignment, across multiple courses
"Secondary readings": student work is examined "secondarily" for other qualities beyond command of course material

Advantages: embedded assessments use work produced by students as a normal part of their course work, which solves the problem of quality of student effort. They are efficient and low-cost, have face validity, provide maximally useful information for improvement with minimum slippage, encourage discussion and collaboration among faculty and support staff, and can create campus-wide interest.

Disadvantages: they require considerable coordination, can be time-consuming to create and implement, and can be time-consuming and labor-intensive to score. They must be designed in relation to specific outcomes and require careful definition of criteria for review (e.g., rubrics) and careful training of reviewers.

Solutions/responses: focus on what's important; use "common questions" if an entire common assignment is impractical; regard outcomes, criteria, and training as an educational investment; provide support and "teaching circles" to discuss implementation and findings; remember the efficiencies and benefits; make the investment.

22 Direct: Course management Programs
Software that allows faculty to set up chat rooms, threaded discussions, etc., and capture student responses Example would be eCollege platform or computer-based activities (VN) Low cost, efficient, adaptable Quiet students can shine Heavy reliance on writing Advantages: Are adaptable to wide range of learning goals, disciplines, environments Use work produced electronically by students as a normal part of course participation Record threaded discussions, chat, ephemera that are impossible or cumbersome to capture face to face Give quiet students an opportunity to shine Can preserve a large volume of material, allow sorting, retrieval, data analysis Are efficient, low-cost Are unintrusive Solve the problem of quality of student effort Allow prompt feedback Develop students' metacognition when assessment results are shared Often include tests, quizzes, tasks as part of package, supporting multiple-method approach, convenience Disadvantages: Rely heavily on student writing skill, comfort with technology Pose challenges to higher levels of aggregation beyond individual course or student May discourage collaboration among faculty, staff, programs Managing large volume of material can be difficult, intimidating "No significant difference" bias may short circuit improvement Tests, quizzes may promote recall, surface rather than deep learning Built-in survey tools encourage collection of indirect rather than direct evidence Direct observation of student performances is difficult or impossible Software may drive the assessment effort, instead of assessment goals and values driving choice, use of the software Solutions/responses: Develop good, focused outcomes, criteria, rubrics Use built-in data management tools Supplement if necessary, e.g. with "The Rubric Processor" Invest in training of faculty, external reviewers Use tests, quizzes with caution, supplement with authentic tasks Negotiate with the maker, customize the software Aim for program-level, not just individual or course-level improvement

23 Direct: Classroom Research
Provides a large collection of techniques individual instructors can use in their classrooms to discover what students are learning or not, and to make rapid adjustments (Wright, 2009)
Conducted continuously
Provides feedback on students' knowledge and on what helps or hinders learning
Unstructured and depends on cooperation
An approach to assessment pioneered by K. Patricia Cross and Thomas A. Angelo

Advantages: classroom research takes place at ground zero of the learning process, for maximum relevance and usefulness and minimum slippage. It offers maximum privacy and minimum risk and anxiety, is conducted continuously, and has formative benefit. It can provide feedback both on what students know and can do and on how they got there, and on what helps or hinders. It motivates students to become more active, reflective learners, can also be used by faculty collectively for the bigger picture, is faculty-friendly and respectful of privacy and autonomy, and offers significant resources (e.g., T. Angelo and K. P. Cross, Classroom Assessment Techniques, 1992) and support networks, especially for community college educators.

Disadvantages: it is unstructured and highly dependent on individuals' cooperation for administering CATs (classroom assessment techniques) and reporting results, and it presents the challenge of generalizing to the program or institution level.

Solutions/responses: provide consistent, careful leadership and oversight; get buy-in from faculty and others; start with agreement on shared outcomes and goals; provide training; make assessment a campus-wide conversation; emphasize the potential for truly useful information for improvement.

24 Direct: Student Self-Assessment
The student demonstrates the ability to accurately self-assess a piece of work or performance, usually in relation to one or more outcomes and a set of criteria, e.g. rubrics Ownership of one’s learning Requires clear expectations Can cause anxiety Advantages: The ultimate in active learning, engagement, ownership of one's learning Highly adaptable Extremely educational for students Promotes internalization of intellectual, personal, professional standards Is an essential component of ongoing professional, personal development Is an essential component of life-long learning Faculty can aggregate individual results to identify general findings, trends Disadvantages: Challenging, especially at outset, for both students and faculty Requires clear outcomes, criteria (e.g., rubrics), expectations for level of proficiency Requires student to assess with candor, not spin May cause anxiety, avoidance Long-standing habits, personality traits may need to be overcome (e.g., self-consciousness, excessive modesty, unrealistically high self-appraisal) Requires tact and true coaching attitude from instructor, ability to critique the work or performance, not the person Requires careful management of others who may be present Solutions/responses: Experienced instructors guide, mentor novice instructors Students receive orientation, training Outcomes, criteria, expectations are clear, widely distributed and understood Examples of self-assessment are available Process is presented as primarily developmental, formative Examples of progress over extended time provide encouragement Self-assessment is risk-free

25 Direct: Local Tests Tests designed in relation to the specific course, program, or institution's curriculum and learning outcomes. Can be cumulative (e.g. comprehensives in the major) or less encompassing but still cross-cutting. Format may vary; need not be multiple choice. Can have content validity Can be creative Risk of focusing on surface, not deep learning These are tests made by the institution or the individual instructor, as opposed to national, normed tests. Advantages: Tests are traditional, widely accepted academic practice Testing across courses or programs requires active faculty participation Can stimulate discussion about alignment of goals, curriculum, pedagogy, etc. Can be designed to have content validity Can adapt readily to institutional changes in curriculum, outcomes Can be open-ended, integrative, highly creative in format Can provide good quality of student effort if course-embedded Provide directly relevant, useful information Forestall comparison with other institutions Disadvantages: Run risk of focusing more on surface than deep learning Provide no norms for reference May contain ambiguous, poorly constructed items May offer questionable reliability and validity May be expensive if test construction is contracted out Will not elicit good quality of student effort if seen as add-on Will create misunderstanding of assessment if seen as a threat May become a missed opportunity to use more innovative approaches May invite finger-pointing Solutions/responses: If norms, benchmarks are important, supplement with purchased test Use on-campus expertise Be careful, pilot any test before large-scale administration Provide a "gripe sheet" Accept that assessment is ultimately human judgment, not psychometric science Keep the focus on useful information & improvement, not test scores per se Depersonalize issues, avoid finger-pointing

26 Direct: Commercial, Standardized Tests
Examples include CPAt, GRE, achievement tests, etc. Normed exams Require little labor Prepare students for licensure Poor content validity They test “test-taking” ability Reinforce negative attitudes and behaviors: memorizing rather than understanding, student as empty vessel, etc. Aka Published Tests Advantages: Are a traditional, widely recognized & accepted means of assessment Require little on-campus time or labor Prepare students for licensure, other high-stakes testing Are norm-referenced Offer longitudinal data, benchmarks Are technically high-quality May reflect recent, important trends in the field (e.g., ETS Major Field Tests) Can be useful as part of a multiple-method approach Disadvantages: May offer poor content validity Generally do not provide criterion-referenced scores Test students' ability to recognize "right" answers Reflect students' test-taking ability Often elicit poor quality of student effort, particularly as add-on Reinforce faculty bias toward "empty vessel" theory of education Reinforce student bias toward education as memorizing, regurgitating "right" answers (i.e. "surface" rather than "deep" learning) Reinforce everybody's bias toward assessment as testing Carry risk of misuse of scores, invidious comparisons Provide little insight into students' problem-solving & thinking skills or ability to discriminate among "good" and "better" answers Offer no opportunity for test takers to construct their own answers verbally, numerically, graphically, or in other ways Give students no opportunity to demonstrate important affective traits, e.g., persistence, meticulousness, creativity, open-mindedness. Are less likely than local methods to stimulate productive discussion Are more likely to elicit finger-pointing, anxiety, resistance Can be very expensive ($10-$30/student, plus administration costs) Generally do not provide good value (i.e., useful information for cost) Solutions/responses: Test samples of students, use matrix sampling Negotiate with test maker Supplement with other methods Use with caution

27 Direct: Competence Interviews
Examples might be oral exam or defense of a thesis Probe for extent of student learning Probably best when combined with other techniques May involve authentic assessment such as simulated interactions with clients Care must be taken when designing assessment to develop protocols, rubrics, etc. Advantages • Can provide direct evidence of student mastery of learning objectives • The interview format allows faculty to probe for the breadth and extent of student learning • Can be combined with other techniques that more effectively assess knowledge of facts and terms • Can involve authentic assessment, such as simulated interactions with clients • Can provide for direct assessment of some student skills, such as oral communication, critical thinking, and problem-solving skills Disadvantages • Requires time to develop, coordinate, schedule, and implement • Interview protocols must be carefully developed • Subjective judgments must be guided by agreed upon criteria • Interviewer training takes time • Interviewing using unstructured interviews requires expertise • Not an efficient way to assess knowledge of specific facts and terms • Some students may be intimidated by the process, reducing their ability to demonstrate their learning

28 How to Choose a Method Need to effectively assess the unit or course objectives Should be aligned with the aims of the program Can include development of disciplinary skills May support development of vocational competencies What has the student learned? Assess learning and the program’s effectiveness Adapted by Lee Dunn from: Morgan, Chris (1999) Southern Cross University, New South Wales, Australia. (Unpublished material for Southern Cross University booklet 'Assessing Students') 2011 Oxford Brookes University, Wheatley Campus, Wheatley, Oxford

29 Critical Thinking Developing arguments, reflecting, evaluating, etc.
Examples of Assessment Activities Essay or Report Journal Book review Comment on a theoretical perspective Essay Report Journal Letter of Advice to .... (about policy, public health matters .....) Present a case for an interest group Prepare a committee briefing paper for a specific meeting Book review (or article) for a particular journal Write a newspaper article for a foreign newspaper Comment on an article's theoretical perspective

30 Information Management & Technical Literacy
Accessing, Managing Information Researching, investigating, interpreting and organizing information, collecting data, etc. Examples of Assessment Activities Project Dissertation Applied task or problem Annotated bibliography Accessing and managing information (Researching, investigating, interpreting, organizing information, reviewing and paraphrasing information, collecting data, searching and managing information sources, observing and interpreting) Annotated bibliography Project Dissertation Applied task Applied problem

31 Personal & Professional Development
Managing and Developing Oneself Working cooperatively or independently, managing time, being self-directed, etc. Examples of Assessment Activities Journals Portfolios Learning contracts Group work Managing and developing oneself (Working co-operatively, working independently, learning independently, being self-directed, managing time, managing tasks, organizing) Journal Portfolio Learning Contract Group work

32 Communication Verbal, nonverbal or written communication, one way, two way or group communication, etc. Examples of Assessment Activities Oral presentation Group work Discussion, debate or role play Observation of professional practice Videotaped presentation Communicating (One and two-way communication; communication within a group, verbal, written and non-verbal communication. Arguing, describing, advocating, interviewing, negotiating, presenting; using specific written forms) Written presentation (essay, report, reflective paper etc.) Oral presentation Group work Discussion/debate/role play Participate in a 'Court of Enquiry' Presentation to camera Observation of real or simulated professional practice

33 Respect and Responsibility
Following established routines/rules, working productively with colleagues Examples of Assessment Activities Role plays Group work This category overlaps with others.

34 Problem-Solving; Developing Plans
Posing problems, defining problems, analyzing data, designing experiments, planning, etc. Examples of Assessment Activities Problem scenario Group work Analyze a case Work-based problem Solving problems and developing plans (Identifying problems, posing problems, defining problems, analyzing data, reviewing, designing experiments, planning, applying information) Problem scenario Group Work Work-based problem Prepare a committee of enquiry report Draft a research bid to a realistic brief Analyze a case Conference paper (or notes for a conference paper plus annotated bibliography)

35 Performing Procedures and Demonstrating Techniques
Computation, using equipment, following procedures, etc. Examples of Assessment Activities Demonstration Role play Make a video Create a poster Performing procedures and demonstrating techniques (Computation, taking readings, using equipment, following laboratory procedures, following protocols, carrying out instructions) Demonstration Role Play Make a video (write script and produce/make a video) Produce a poster Lab report Prepare an illustrated manual on using the equipment, for a particular audience Observation of real or simulated professional practice

36 Demonstrating Knowledge, Understanding
Recalling, describing, reporting, relating, etc. Examples of Assessment Activities Written or oral exam Essay or report Comment on the accuracy of records Write an answer to a client’s question Write an encyclopedia entry Demonstrating knowledge and understanding (Recalling, describing, reporting, recounting, recognizing, identifying, relating & interrelating) Written examination Oral examination Essay Report Comment on the accuracy of a set of records Devise an encyclopedia entry Produce an A - Z of ... Write an answer to a client's question Short answer questions: True/False/ Multiple Choice Questions (paper-based or computer-aided-assessment)

37 Designing, Creating, Performing
Imagining, visualizing, designing, producing, creating, performing, etc. Examples of Assessment Activities Portfolio Performance or demonstration Presentation projects Designing, creating, performing (Imagining, visualizing, designing, producing, creating, innovating, performing) Portfolio Performance Presentation Hypothetical Projects

38 Variety of Assessments
Not everyone shines with the “tried and true” Selecting a variety of methods (aligned with outcomes) allows more students to show what they know or can do These are just examples of possible assessments. It is possible to assess every student accurately and fairly using more than one method of assessment.

