
BRIDGING THE GAP BETWEEN MACHINE TRANSLATION OUTPUT AND IMAGES IN MULTIMODAL DOCUMENTS

APROXIMANDO RESULTADOS DE TRADUÇÃO AUTOMÁTICA E IMAGENS EM DOCUMENTOS MULTIMODAIS

Abstract

The aim of this article is to report on recent findings concerning the use of Google Translate outputs in multimodal contexts. The development and evaluation of machine translation typically focus on the verbal mode, and accounts in the field of how text-image relations behave in automatically translated multimodal documents are rare. This work therefore seeks to identify what such relations are and how to describe them. To do so, this investigation approaches the problem through an interdisciplinary interface between Machine Translation and Multimodality, analyzing examples from the Wikihow website; it then reports on recent work on suitable tools and methods for annotating these phenomena, as part of a long-term goal of assembling a corpus. Finally, the article discusses the findings, including some limitations and perspectives for future research.

Keywords
Multimodality; Machine Translation; Machine Translation Output Classification; Intersemiotic Texture; Intersemiotic Mismatches
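
The abstract mentions annotating text-image relations in machine-translated multimodal documents with a view to assembling a corpus. As a purely illustrative aid, the sketch below shows one minimal way such an annotation record could be represented; the schema, field names, and relation labels are assumptions for demonstration and are not the annotation scheme proposed in the article.

```python
from dataclasses import dataclass, asdict
from typing import Optional
import json

# Placeholder labels only: the article's own categories (e.g. intersemiotic
# texture, intersemiotic mismatches) would replace these illustrative values.
RELATION_LABELS = {"coherent", "intersemiotic_mismatch", "undetermined"}


@dataclass
class TextImagePair:
    """One annotated unit: an image, its source text, and the MT output."""
    document_url: str                  # e.g. a Wikihow page
    image_ref: str                     # filename or URL of the image
    source_text: str                   # original step or caption
    mt_output: str                     # Google Translate output for that text
    relation: str = "undetermined"     # how the MT output relates to the image
    notes: Optional[str] = None        # free-text annotator comments

    def __post_init__(self):
        if self.relation not in RELATION_LABELS:
            raise ValueError(f"unknown relation label: {self.relation!r}")


# A hypothetical example record (URL and texts invented for illustration).
pair = TextImagePair(
    document_url="https://www.wikihow.com/example",
    image_ref="step-1.jpg",
    source_text="Crack the egg into the bowl.",
    mt_output="Quebre o ovo na tigela.",
    relation="coherent",
    notes="MT output still matches the depicted action.",
)

# Serializing records as JSON would allow them to be collected into a corpus.
print(json.dumps(asdict(pair), ensure_ascii=False, indent=2))
```

A structured record of this kind would let each text-image pair carry both the automatic translation and an explicit label for the intersemiotic relation, which is the sort of information a corpus of such documents would need to capture.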
