An Analytic Approach to English Language Instructors' Scoring Differences of Writing Exams

Abstract

This study investigated the scoring of writing exams by raters to determine how consistent and reliable scores can be obtained, focusing both on individual raters' scoring over time and on different raters' scores for similar written performances. Writing assessment lacks clear-cut answers; consequently, objectivity in scoring has long been sought, and various studies have been conducted to achieve it. The current study primarily aimed to identify the factors affecting the scoring process in order to reduce variance in scoring results. To this end, data were collected at a state university in Ankara using three techniques (questionnaire, think-aloud, and interview). The research was conducted with 15 ELT instructors teaching at the School of Foreign Languages, chosen by convenience sampling, and 25 ELT instructors who participated in the questionnaire. A mixed-methods research design was employed. For the quantitative findings, Kruskal-Wallis and Mann-Whitney U tests were run in SPSS and yielded insignificant results, whereas the quantitative results expressed as percentages were quite significant. As for the qualitative findings, the analysis clearly illustrated that factors such as rater effects, rubric use, scoring styles, prioritized and ignored criteria, teaching experience, comparing performances, failure to adapt to the level being assessed, and institutional goals cause both inter-rater and intra-rater unreliability. To obtain consistent results, rubrics with well-defined criteria and categories, consultation and feedback, standardization meetings, and frequent workshops can be pursued, in addition to benchmarking and multiple scoring, and future ELT instructors can be guided accordingly.
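As a hedged illustration of the inferential tests the abstract names, the sketch below runs Kruskal-Wallis and Mann-Whitney U comparisons with `scipy.stats` on invented rater scores; the study itself used SPSS, and all data values and rater names here are hypothetical, not drawn from the study.

```python
# Hypothetical sketch of the nonparametric tests mentioned in the abstract.
# The score data below are invented for illustration only.
from scipy.stats import kruskal, mannwhitneyu

# Invented scores (0-100) that three raters assigned to the same ten essays.
rater_a = [72, 65, 80, 58, 90, 74, 66, 83, 71, 77]
rater_b = [70, 68, 78, 60, 88, 72, 64, 85, 69, 75]
rater_c = [75, 62, 82, 55, 92, 70, 68, 80, 73, 79]

# Kruskal-Wallis: do the three raters' score distributions differ?
h_stat, kw_p = kruskal(rater_a, rater_b, rater_c)

# Mann-Whitney U: pairwise comparison of two raters' score distributions.
u_stat, mw_p = mannwhitneyu(rater_a, rater_b)

# A p-value above the chosen alpha (e.g. 0.05) would parallel the study's
# "insignificant" quantitative results: no detectable rater difference.
print(f"Kruskal-Wallis H={h_stat:.3f}, p={kw_p:.3f}")
print(f"Mann-Whitney U={u_stat:.1f}, p={mw_p:.3f}")
```

Such tests are a common way to check inter-rater differences when scores cannot be assumed to be normally distributed, which is why nonparametric tests rather than ANOVA or t-tests appear in the study design.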