Date of Award
Culminating Project Type
English: Teaching English as a Second Language: M.A.
College of Liberal Arts
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.
Keywords and Subject Headings
ESL, Summary Evaluation, Rubric, Language Use, Mechanics, Grammar
The literature reviewed in this research suggests that insufficient attention has been devoted to the diverse issues surrounding the development of explicit tools and rating procedures for evaluating ESL writing in general, and academic summary writing in particular. In fact, only one study on assessment methodologies was found to focus on the factors involved in a summary evaluation decision. The purpose of this study was to examine human raters' overall summary assessment processes and to identify which language errors they attend to when evaluating and making scoring decisions on students' written summaries within the context of an ESL program at a university in the Midwest. Seventy summary samples (N = 70), randomly coded from the college ESL (CollESL) program database, were read, and error-type analyses were conducted based on two analytic rubric components: Language Use and Mechanics. The collected data were analyzed using SPSS.
The results revealed significant correlations: (a) between human raters' overall summary score (SumScore) and language use, (b) between SumScore and mechanics, (c) between language use and mechanics, and (d) between SumScore and two external readability measures that were also used in the data analyses. The researcher concluded that the overall SumScore did correlate highly with language use and mechanics on the one hand, and with the two external readability scales on the other. However, no particular error type appeared to explain the variability in the language score, meaning that human raters may have attended not to one but to multiple parameters not included in this research. This study suggests that scorers maintain knowledge of evaluation rubrics, which should be explicitly developed for ESL academic summaries, with the goal of making the rating process more reliable, investigative, and informative across a variety of areas and for diverse stakeholders.
Adiang, Catherine A., "Two Components of the ESL Summary Evaluation Rubric: Language use and Mechanics" (2013). Culminating Projects in TESL. 55.