Date of Award
6-2013
Culminating Project Type
Thesis
Degree Name
English: Teaching English as a Second Language: M.A.
Department
English
College
College of Liberal Arts
First Advisor
Choonkyong Kim
Second Advisor
John Madden
Third Advisor
Marc Markell
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.
Keywords and Subject Headings
ESL, Summary Evaluation, Rubric, Language Use, Mechanics, Grammar
Abstract
Literature reviewed in this research suggests that insufficient effort has been devoted to the diverse issues relating to the development of explicit tools and rating procedures for evaluating ESL writing in general, and academic summary writing in particular. In fact, only one study on assessment methodologies was found to have focused on factors that could be involved in a summary evaluation decision. The purpose of this study was to examine human raters' overall summary assessment processes, to see what language errors they attend to when evaluating and making scoring decisions on students' written summaries within the context of an ESL program at a university in the Midwest. Seventy summary samples (N=70) randomly coded from the college ESL (CollESL) program database were read, and error type analyses were done based on two analytic rubric components: Language Use and Mechanics. The data collected were analyzed using SPSS.
The results revealed significant correlations: (a) between human raters' overall summary score (SumScore) and language use, (b) between SumScore and mechanics, (c) between language use and mechanics, and (d) between SumScore and two external readability measures that were also used in the data analyses. The researcher concluded that the overall SumScore did correlate highly with language use and mechanics on the one hand, and with the two external readability scales on the other. However, no particular error type seemed to explain the language score variability, meaning that human raters might have looked at not just one, but multiple parameters not included in this research. This study suggests that scorers maintain knowledge of evaluation rubrics, which should be explicitly developed for the ESL academic summary, with the goal of making the rating process more reliable, investigative, and informative across a variety of areas and to diverse stakeholders.
Recommended Citation
Adiang, Catherine A., "Two Components of the ESL Summary Evaluation Rubric: Language use and Mechanics" (2013). Culminating Projects in TESL. 55.
https://repository.stcloudstate.edu/tesl_etds/55
Comments/Acknowledgements
I am particularly grateful to my professor, Dr. Choonkyong Kim, whose suggestions, guidance, and encouragement were instrumental in completing this Master's work. I wish to express my thanks to Dr. Madden for his advice and meticulous reminders that kept me on track during my research, not forgetting Dr. Markell for his willingness to read and provide feedback on the study. I am also very appreciative of Dr. Robinson, who believed in me even more than I trusted myself, and who helped me enroll in the TESL MA program at St. Cloud State University (SCSU).
I would like to extend my heartfelt gratitude to my family and friends, both here and back in Cameroon, for their moral and emotional support during my years of hard work at SCSU. Close to me here, I am very thankful to my husband, N. F. Alfred, for helping me realize that no hurdles are ever high enough to stand in the way of realizing a dream.
Most importantly, my special appreciation goes to my beautiful and smart team-children, Ange Vianney (computer expert), Divy-Tresor (emotional counselor), and Laure-Briana (star-general manager), who embarked on this journey with me from day one and whose remarkable understanding and patience beyond their ages have been a constant push toward success throughout my studies at St. Cloud State University.