A Survey of the Assessment Landscape: What Methods Cultivate Student Learning?

Tuesday, November 3, 2009: 10:00 AM
Convention Center, Room 337-338, Third Floor

Douglas Eder, Emeritus, Assessment and Biology, Southern Illinois University Edwardsville, Edwardsville, IL
Abstract:
Twenty-five years of assessment as public policy in the US have produced a virtual landscape of varied assessment methods. The earliest standardized tests have been joined by comprehensive exams, capstone devices, embedded measures, midpoint methods, assessment days, simulations, portfolios, and too many more to mention. To this incomplete list of direct measures can be added the indirect measures of surveys, questionnaires, focus groups, interviews, reflective essays, and more. Given this array of devices, some people ask, “Which of these effectively monitor student learning…really?” Other people ask, “Which of these not only monitor student learning but actually help students learn?” And still other people ask, “Which of these won’t add to my budget and workload by distracting me from the many important things I have to do?” Finally, one also hears this: “Why, after twenty-five years of assessment, haven’t we seen the promised improvements in student learning? Because assessment is obviously a bureaucratic waste of time, why should I involve myself in any of it?”
    The purpose of this interactive presentation is to explore the landscape of assessment devices, including those mentioned above, and to highlight the mechanisms whereby assessment can and does improve student learning. Chief among these mechanisms is the application of appropriate feedback to “close the loop.”
    In outline form, this session contains four sections:
1.   Assessment history: Why now and why us?
2.   A review of assessment’s progress and annoyance
3.   Widespread use of assessment tools but not of results
4.   A peek into our future
    (1) The first topic briefly summarizes the changing politics and economics of the 1980s as the foundation for the emergence of assessment as public policy.
    (2) The second topic reviews some substantial gains in student learning alongside the simultaneous rejection of assessment by many faculty members as just another passing fad. The result of this rejection was widespread ignorance of positive results and equally widespread superficiality in adopting assessment. Consequently, for the past twenty-five years, gains in student learning have occurred only in small pockets of higher education. Most college and university faculty members therefore have not experienced for themselves the increases in achievement and the savings in time and money that assessment can produce.
    (3) The third section summarizes a host of assessment methods, including standardized tests, and suggests teaching and learning environments where each method has been used effectively. The review also notes the individual strengths and weaknesses of each method.
    (4) Whereas the first three sections are treated as a coherent whole, the fourth section diverges and suggests where the power of assessment really lies: the application of timely and relevant feedback. Regardless of the assessment method chosen, using the method, even skillfully, is not the goal of assessment. Neither is producing assessment results. Rather, the goal of assessment is to improve student learning by applying the results of assessment as feedback. It is this step that users of all the methods have largely missed. Thus, this session will end by emphasizing the importance and mechanisms of effective feedback and the improvements that accompany its application.