Vol. 9(1) July 2020
Edited by Joss Moorkens, Dorothy Kenny, and Félix do Carmo
Since the well-publicised advent of neural MT, many more language service providers have begun to offer raw and post-edited MT as a reduced-cost option within their suite of products (Lommel and DePalma 2016). The level of automation in translation is usually related to the perishability of the text, along with considerations of regulatory compliance and risk, but new use cases for NMT regularly appear in areas where automation might previously have been considered unwise (Moorkens 2017, Way 2018).
Meanwhile, research on MT has tended to focus on building systems that maximise output quality, on evaluating that output cost-effectively, and on various forms of pre- and post-processing of texts. Little attention has been paid to the workflows into which these MT systems are built outside of experimental conditions, and where such workflows have been considered, the emphasis has been on efficiency and utility (Plitt and Masselot 2010, O’Brien 2011).
Likewise, the origin and ownership of training data have received scant attention. At present, competing claims to copyright in translations each have legal merit but remain untested in court, and they are largely ignored within the translation industry (Troussel and Debussche 2014). These conflicting claims could have an anticommons effect, in which so many competing claims attach to a resource that it becomes impossible to use or exploit it. Work created by a machine does not currently qualify for copyright, meaning that the copyright – and the liability – lies with the operator, a risk that is rarely considered in MT use. In repurposing and retasking human translations and translation fragments, the industry is also avoiding a discussion of the ethical dimensions of data management, including consent for secondary use, copyright management, and data ownership – issues that affect not just vendors but also clients.
And where the original motivation for MT was utopian, the main driver is now the pressure to reduce human costs. If translation is reduced to a series of “language-replacement exercises” (Pym 2003) to be carried out at speed by freelance workers whose productivity is quantified within a translation tool, there is a real risk that talent will be discouraged (Abdallah 2014). How do we train students to enter such an industry – or should we even do so? And does the very existence of machine translation undermine efforts to train translators or, more broadly, to educate language learners in the first place?
At this point, we believe it is worth examining the ethics of MT use in industry and its economic and social effects on all stakeholders.
With these issues in mind, we would like to invite submissions that respond to the following and related questions:
- What would an ethical MT supply chain look like?
- How can translation data be used efficiently, but in a way that respects the rights of all agents in the supply chain?
- How has our approach to risk evolved in the context of machine translation?
- What role is played by technology in supporting the business models that are reshaping this chain?
- What real effect do mergers and acquisitions have on the sustainability of translation as an industry and on the people who make their living in it?
- How can we guarantee the safety of our products for consumers, while maximising the social quality (Abdallah 2014) of all workers in the industry?
- How can we continue to attract and retain human talent in the translation industry?
- What can academics and translator trainers do to make a positive impact on the use of automation in the translation industry?
Instructions for contributors
Articles should be no more than 8,000 words long and should follow the journal’s house style. Full instructions for authors can be found on the journal website. Articles are to be submitted via Editorial Manager, choosing the option for this special issue.
Please send any enquiries to joss.moorkens@dcu.ie with the subject line ‘Translation Spaces’.
Schedule
- October 15th 2019 – deadline for submission of full articles for peer review
- December 18th 2019 – feedback from peer-review to authors
- January 20th 2020 – deadline for submission of authors’ revised articles
- January 24th 2020 – feedback from guest editors on revised articles
- January 29th 2020 – deadline for submission of final version
- March 25th 2020 – proofs sent to authors
- July 2020 – publication
References
Abdallah, K. (2014). Social Quality: Key to Collective Problem Solving in Translation Production Networks. In G. Ločmele and A. Veisbergs (eds) Translation, Quality, Costs. Riga: University of Latvia Press, 5–18.
Lommel, A. and DePalma, D. A. (2016). Europe’s Leading Role in Machine Translation: How Europe Is Driving the Shift to MT. Boston: Common Sense Advisory.
Moorkens, J. (2017). Under pressure: translation in times of austerity. Perspectives 25(3), 464–477.
O’Brien, S. (2011). Towards predicting post-editing productivity. Machine Translation 25(3), 197–215.
Plitt, M. and Masselot, F. (2010). A productivity test of statistical machine translation post-editing in a typical localisation context. Prague Bulletin of Mathematical Linguistics 93, 7–16.
Pym, A. (2003). Translational ethics and electronic technologies. Paper presented at the VI Seminário de Tradução Científica e Técnica em Língua Portuguesa: A Profissionalização do Tradutor.
Way, A. (2018). Quality Expectations of Machine Translation, in J. Moorkens, S. Castilho, F. Gaspari and S. Doherty (eds) Translation Quality Assessment, Cham: Springer, 159–178.