%0 Book Section
%B Handbook of Natural Language Processing and Machine Translation
%D 2011
%T Machine Translation Evaluation and Optimization
%A Dorr, Bonnie J
%A Olive, Joseph
%A McCary, John
%A Christianson, Caitlin
%E Olive, Joseph
%E Christianson, Caitlin
%E McCary, John
%X The evaluation of machine translation (MT) systems is a vital field of research, both for determining the effectiveness of existing MT systems and for optimizing their performance. This part describes a range of evaluation approaches used in the GALE community and introduces the evaluation protocols and methodologies used in the program. We discuss the development and use of automatic, human, task-based, and semi-automatic (human-in-the-loop) methods of evaluating machine translation, focusing on the use of human-mediated translation error rate (HTER) as the evaluation standard used in GALE. We discuss the workflow associated with the use of this measure, including post-editing, quality control, and scoring. We document the evaluation tasks, data, protocols, and results of recent GALE MT evaluations. In addition, we present a range of approaches for optimizing MT systems on the basis of different measures. We outline the requirements and specific problems that arise when using different optimization approaches and describe how the characteristics of different MT metrics affect optimization. Finally, we describe recent and ongoing work on the development of novel fully automatic MT evaluation metrics that have the potential to substantially improve the effectiveness of evaluation and optimization of MT systems.
%I Springer New York
%P 745 - 843
%8 2011///
%@ 978-1-4419-7713-7
%G eng
%U http://dx.doi.org/10.1007/978-1-4419-7713-7_5