BDÜ Conference in Bonn – MT Still at Odds with Terminology

From 22 to 24 November, Germany’s largest association of translators and interpreters, the BDÜ, staged a three-day conference in Bonn in the former premises of the Bundestag. Since the German federal parliament moved to Berlin, the plenary building has become part of a huge World Conference Centre, where the United Nations Climate Change Conference was also held two years ago.

More than a thousand professionals from 25 countries and about 150 speakers took part in this third international BDÜ conference, entitled “Translating and Interpreting 4.0 – New Ways in the Digital Age”. Its topics ranged from terminology management to translator training, but the leitmotif of almost all lectures, keynotes, workshops and panel discussions was (neural) machine translation combined with artificial intelligence, and the question of how this “disruptive” technological change will affect the whole industry or may even turn it inside out. While at similar conferences a few years ago MT was still the elephant in the room, this time it was the all-encompassing topic from which new catchphrases emanated, such as “BLEU score” (a metric for comparing the performance of different MT engines) or “gisting” (using raw MT output, without any post-editing, just to get a rough idea of what a text is about). Inevitably, a queasy unease about the future shape of the profession surfaced in many casual conversations during the breaks.
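For readers who have not yet met the metric, here is a minimal sketch of how a BLEU score is computed in practice, using NLTK’s implementation; the candidate and reference sentences are invented for illustration and stand in for an MT engine’s output and a human translation.

```python
# BLEU compares an MT engine's output (the "candidate") against one or
# more human reference translations via n-gram overlap. Sentences here
# are invented examples.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "committee", "approved", "the", "draft", "report"]]
candidate = ["the", "committee", "has", "approved", "the", "draft", "report"]

# Smoothing avoids a zero score when some n-gram order has no matches.
smooth = SmoothingFunction().method1
score = sentence_bleu(reference, candidate, smoothing_function=smooth)
print(f"BLEU: {score:.3f}")  # closer to 1.0 means closer to the reference
```

A score like this is only meaningful when averaged over a sizeable test set, which is how conference speakers typically used it to rank competing engines.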

The conference provided a perfect opportunity to cross-check the puffery and half-truths of suppliers and developers bedazzled by their own products against the first-hand accounts of actual users of MT technology in real-life conditions. Laymen (even those not monolingually challenged) have always had great difficulty understanding the translation process and the strategies involved. This means that translators have had a hard time explaining the qualitative aspects of a good (and a bad) human translation to outsiders, and it has now become even more difficult for them to answer questions about the usefulness of machine translation. They will have to put even greater effort into outlining the pros and cons, the opportunities and the risks, of MT tools to their clients and other stakeholders.

One of the points frequently raised was data protection and information security when using MT, but most speakers did not even venture a halfway satisfactory answer, steering clear of the highly complex legal implications. Anonymising the source text by striking out any names is rather cumbersome and often insufficient in any case. Equally important are liability issues when medical, pharmaceutical or other texts with life-and-death implications are pre-translated by machines.
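To make the limitation concrete, here is a deliberately simplistic sketch of the kind of name redaction described above, performed before a segment is sent to an online MT engine. The pattern and example sentence are invented; real anonymisation requires proper named-entity recognition and, as noted, is often still insufficient, since dates, job titles and context can re-identify people.

```python
import re

# Naive redaction: blank out capitalised first-name/surname pairs.
# This misses single names, titles and non-name capitalised pairs,
# which is exactly why speakers called the approach insufficient.
NAME_PAIR = re.compile(r"\b[A-ZÄÖÜ][a-zäöüß]+\s+[A-ZÄÖÜ][a-zäöüß]+\b")

def redact_names(segment: str) -> str:
    return NAME_PAIR.sub("[NAME]", segment)

print(redact_names("Martina Beispiel leitet die Abteilung seit 2017."))
# -> "[NAME] leitet die Abteilung seit 2017."
```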

Many presentations dealt with the different ways in which MT output can (and actually must) be improved by post-editing. Post-editing of MT results already appears to be a well-established new task, or even job profile, with its own challenges, shortcomings and pitfalls, such as monotonous work routines and a greater risk of overlooking mistakes in seemingly perfect target-language sentences (which may be missing vital expressions the algorithm could not cope with, or may even contain inexplicable additions called “hallucinations”). To be a good post-editor, one must first know how fundamentally human and machine translation differ in their approaches. Universities may therefore have to consider going back to initial training courses and exams for future translators (and post-editors) without recourse to dictionaries or translation tools of any kind, so that students grasp the nuts and bolts of the trade before being allowed, in a second stage, to use everything technology has to offer.

Terminology is still the missing link when it comes to further advances in MT technology. Pre-editing – another new buzzword often encountered at the conference – is no real remedy, as it entails nothing more than brushing up the formal features of a source text. It may be useful in large translation projects involving many languages, provided that prior consultation with the customer who produced the original text is possible and feasible. So far, MT engines process every sentence separately, without considering the coherence of the text as a whole. Even experiments in which additional contextual information was inserted into the source text to help the machine disambiguate key terms proved totally ineffective. A more promising approach may be “engine training” or “engine customisation”, i.e. feeding existing translation memory content to the MT engine. Not all engines offer this option, though, and the time-consuming preparatory process only pays off in large-scale projects. Solutions based on a direct link-up between MT engines and terminology databases are not yet on the horizon.
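As an illustration of the preparatory work that engine customisation involves, the sketch below extracts aligned segment pairs from an existing translation memory in the TMX exchange format (an XML standard). The file name and language codes are invented, and the actual upload step is engine-specific, so it is only hinted at in a comment.

```python
# Extract (source, target) segment pairs from a TMX translation memory
# so they can be fed to an MT engine's training interface.
import xml.etree.ElementTree as ET

# ElementTree's key for the standard xml:lang attribute used by TMX.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

def extract_pairs(tmx_path: str, src: str = "de", tgt: str = "en"):
    """Yield (source, target) segment pairs from a TMX file."""
    root = ET.parse(tmx_path).getroot()
    for tu in root.iter("tu"):          # one translation unit per segment
        segs = {}
        for tuv in tu.iter("tuv"):      # one variant per language
            lang = tuv.get(XML_LANG, "").lower().split("-")[0]
            seg = tuv.find("seg")
            if seg is not None and seg.text:
                segs[lang] = seg.text.strip()
        if src in segs and tgt in segs:
            yield segs[src], segs[tgt]

pairs = list(extract_pairs("memory.tmx"))  # hypothetical file
# "pairs" would then be uploaded via the MT provider's own training API,
# which differs from engine to engine.
```

Assembling, cleaning and aligning such data is precisely the time-consuming step that, as noted above, only pays off in large-scale projects.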

“Close, but no cigar”, as one speaker summed it up in the title of her presentation.


Written by Martin Dlugosch

He is currently a Rotating Terminologist at the Terminology Coordination Unit of the European Parliament in Luxembourg and holds MA degrees in Translation Studies and International Marketing from Mainz University and Reutlingen University respectively. Since 2009, he has been a translator in the European Parliament’s German Translation Unit.