HOW FAR DO WE AGREE ON THE QUALITY OF TRANSLATION?

Vol. 1, Issue 1, 2015, pp. 18-31

Author: Maria Kunilovskaya

Affiliation: Tyumen State University, Tyumen, Russia

Abstract
The article aims to describe the inter-rater reliability of translation quality assessment (TQA) in translator training, calculated as a measure of raters’ agreement either on the number of points awarded to each translation under a holistic rating scale or on the types and number of translation mistakes marked by raters in the same translations. We analyze three samples of student translations assessed by several panels of raters using different assessment methods, and draw conclusions about the statistical reliability of real-life TQA results in general and about objective trends in this essentially subjective activity in particular. We also try to identify the more objective data yielded by error-analysis-based TQA and suggest an approach to ranking error-marked translations that can be used for subsequent relative grading in translator training.
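For readers less familiar with agreement statistics, the simplest ways to quantify how far two raters agree on holistic scores are raw (percent) agreement and a chance-corrected coefficient such as Cohen’s kappa. The sketch below is a minimal, self-contained Python illustration using hypothetical rater scores; it is not drawn from the article itself, which relies on established reliability measures such as Krippendorff’s alpha (see Krippendorff, 2011, and Freelon, 2010, in the references) for its actual calculations.

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Proportion of items on which two raters assign the same score."""
    assert len(r1) == len(r2) and len(r1) > 0
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: agreement between two raters corrected for chance."""
    n = len(r1)
    p_o = percent_agreement(r1, r2)          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)        # marginal score frequencies
    categories = set(c1) | set(c2)
    p_e = sum((c1[c] / n) * (c2[c] / n) for c in categories)  # chance agreement
    if p_e == 1.0:
        return 1.0
    return (p_o - p_e) / (1 - p_e)

if __name__ == "__main__":
    # Hypothetical holistic scores (2-5 scale) given by two raters to ten student translations.
    rater_a = [5, 4, 3, 4, 2, 5, 3, 4, 4, 3]
    rater_b = [5, 3, 3, 4, 2, 4, 3, 4, 5, 3]
    print(f"raw agreement: {percent_agreement(rater_a, rater_b):.2f}")
    print(f"Cohen's kappa: {cohens_kappa(rater_a, rater_b):.2f}")
```

Raw agreement alone can look deceptively high when raters cluster around a few scores, which is why chance-corrected coefficients (kappa, or Krippendorff’s alpha for more than two raters and missing data) are the standard choice in inter-rater reliability studies.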

Keywords: TQA, translation mistakes, inter-rater reliability, error-based evaluation, error-annotated corpus, RusLTC

Article history:
Received: 10 April 2014
Accepted: 21 December 2014
Published: 1 February 2015

Citation (APA6):
Kunilovskaya, M. (2015). How far do we agree on the quality of translation? English Studies at NBU, 1(1), 18-31. Retrieved from http://esnbu.org/data/files/2015/2015-1-2-kunilovskaya-pp18-31.pdf

Copyright © 2015 Maria Kunilovskaya


This is an open access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0), which permits non-commercial use, distribution, and reproduction in any medium, provided the original author and source are credited. If you want to use the work commercially, you must first get the author's permission.

References:

Artstein, R. & Poesio, M. (2008). Inter-Coder Agreement for Computational Linguistics. Computational Linguistics, 34(4), 555–596. doi: 10.1162/coli.07-034-R2

Freelon, D. G. (2010). ReCal: Intercoder Reliability Calculation as a Web Service. International Journal of Internet Science, 5(1), 20–33.

Kelly, D. (2005). A Handbook for Translator Trainers. A Guide to Reflective Practice. Manchester: St. Jerome Publishing.

Knyazheva, E. & Pirko, E. (2013). Otsenka kachestva perevoda v rusle metodologii sistemnogo analiza ['Translation Quality Assessment and Systems Analysis Methodology']. Journal of Voronezh State University. Linguistics and Intercultural Communication Series, 1, 145-151.

Krippendorff, K. (2004). Content Analysis: An Introduction to Its Methodology. Sage Publications.

Krippendorff, K. (2011). Computing Krippendorff's Alpha-Reliability. Retrieved from http://repository.upenn.edu/asc_papers/43/

Neubert, A. (2000). Competence in Language, in Languages, and in Translation. In Schäffner, C. & Adab, B. (Eds.), Developing Translation Competence (pp. 3–17). Amsterdam/Philadelphia: John Benjamins Publishing Company. doi: 10.1075/btl.38

Strijbos, J.-W. & Stahl, G. (2007). Methodological Issues in Developing a Multidimensional Coding Procedure for Small-group Chat Communication. Learning and Instruction, 17(4), 394-404. doi: 10.1016/j.learninstruc.2007.03.005

Waddington, Ch. (2001). Should Translations be Assessed Holistically or through Error Analysis? Hermes, 26, 15-37. Retrieved from http://download2.hermes.asb.dk/archive/download/H26_03.pdf

Williams, M. (2009). Translation Quality Assessment. Mutatis Mutandis, 2(1), 3–23.

Zwilling, M. (2009). O kriteriiakh otsenki perevoda ['On Translation Quality Assessment Criteria']. In Zwilling, M. (Ed.), O perevode i perevodtchikakh [On Translation and Translators] (pp. 56–63). Moskva: Vostotchnaia kniga.