Using Automated Scoring to Monitor Reader Performance and Detect Reader Drift in Essay Scoring

Authored by: Susan M. Lottridge, E. Matthew Schulz, Howard C. Mitzel

Handbook of Automated Essay Evaluation

Print publication date: April 2013
Online publication date: July 2013

Print ISBN: 9781848729957
eBook ISBN: 9780203122761
Adobe ISBN: 9781136334801

DOI: 10.4324/9780203122761.ch14


Abstract

With advantages such as rapid turnaround and the economic benefit of reduced scoring costs, automated scoring is widely viewed as being on a path to replace human labor, as so many prior innovative technologies have done. As a result, automated scoring development has tended to focus on validity, or measures of accuracy relative to human scoring. This chapter focuses on a nearer-term process: the transition from human to machine scoring and how the two scoring methods can be used concurrently. The authors have previously referred to the use of both human and machine scoring within a single assessment program as a "blended" scoring model (Lottridge, Mitzel, & Chou, 2009). The central idea is to determine whether two imperfect scoring methods can be mutually leveraged in some optimal way to improve the overall accuracy of scoring. Although the authors have not yet achieved that goal, the studies presented here can be taken as a progress report toward that objective.
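
To make the blended idea concrete, the sketch below (ours, not the authors' method) pairs each human score with the engine's score for the same response and tracks their agreement over successive windows; a drop in quadratically weighted kappa below a floor is flagged as possible reader drift. The score range, window size, and kappa floor are illustrative assumptions, not values from the chapter.

    # A minimal sketch, assuming integer scores on a fixed scale and a
    # stream of (human, machine) score pairs in presentation order.

    def quadratic_weighted_kappa(human, machine, min_score=0, max_score=4):
        """Quadratic weighted kappa between two parallel lists of integer scores."""
        n_cats = max_score - min_score + 1
        n = len(human)
        # Observed human-by-machine score matrix.
        obs = [[0.0] * n_cats for _ in range(n_cats)]
        for h, m in zip(human, machine):
            obs[h - min_score][m - min_score] += 1
        # Marginal totals for each scoring source.
        h_marg = [sum(row) for row in obs]
        m_marg = [sum(obs[i][j] for i in range(n_cats)) for j in range(n_cats)]
        num = den = 0.0
        for i in range(n_cats):
            for j in range(n_cats):
                w = (i - j) ** 2 / (n_cats - 1) ** 2   # quadratic disagreement weight
                num += w * obs[i][j]                   # observed weighted disagreement
                den += w * h_marg[i] * m_marg[j] / n   # expected under independence
        return 1.0 - num / den if den else 1.0

    def flag_drift(human, machine, window=100, kappa_floor=0.6):
        """Flag score windows where human-machine agreement falls below the
        floor, a possible (not conclusive) sign of reader drift."""
        flags = []
        for start in range(0, len(human) - window + 1, window):
            k = quadratic_weighted_kappa(human[start:start + window],
                                         machine[start:start + window])
            if k < kappa_floor:
                flags.append((start, round(k, 3)))
        return flags

    # Example: two identical score streams produce no flags.
    # flag_drift([3, 2, 4] * 40, [3, 2, 4] * 40, window=60)  ->  []

In this framing the machine score serves as a stable reference point against which each human reader's scores can be monitored over time, which is one way the two imperfect methods might be used concurrently.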
