The latest big leap forward in machine translation: adaptive MT

June 15, 2017 12:08 pm

Technological developments in machine translation are happening very fast, and as a result computer-aided translation tools keep reinventing themselves with a view to making the translation process more time-efficient and, hence, better supporting translators’ work. In this post, Heriot-Watt University lecturer and freelance translator Ramón Inglada presents one of the recent innovations in this field, which is being followed with increasing curiosity by both the industry and academia: adaptive machine translation.


Over the last 18 months, the topic of neural machine translation (NMT) has attracted high levels of interest in the area of translation technology. Research output over this period seems to further confirm this trend.

While NMT undoubtedly has a lot of potential and many interesting applications, another relatively recent development in the area of MT has also attracted a fair amount of attention: adaptive MT. This new technology claims to allow an MT system to learn from corrections on the fly. If you have worked in post-editing, chances are that the idea of not having to enter the same changes time and time again will sound like music to your ears.
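To make the idea of "learning from corrections on the fly" concrete, here is a deliberately naive Python sketch of an adaptive wrapper around a base MT engine. It illustrates the principle only and is not how Lilt’s or SDL’s systems are actually implemented; all names (`AdaptiveMT`, `suggest`, `post_edit`) are hypothetical.

```python
class AdaptiveMT:
    """Toy adaptive wrapper: it remembers post-edits so the same
    correction never has to be entered twice."""

    def __init__(self, base_translate):
        self.base_translate = base_translate  # any underlying MT function
        self.corrections = {}                 # learned source -> target pairs

    def suggest(self, source):
        # Prefer a previously learned correction over the base engine.
        if source in self.corrections:
            return self.corrections[source]
        return self.base_translate(source)

    def post_edit(self, source, edited):
        # Adapt only when the translator actually changed the suggestion;
        # an accepted suggestion triggers no learning.
        if edited != self.suggest(source):
            self.corrections[source] = edited


# A stand-in "engine" that just labels the input as a draft.
engine = AdaptiveMT(lambda s: "draft: " + s)

first = engine.suggest("hello")        # base engine's (wrong) suggestion
engine.post_edit("hello", "hola")      # translator corrects it once
second = engine.suggest("hello")       # correction is now reused
print(first, "->", second)
```

A real adaptive system generalises far beyond exact string matches, of course; the point of the sketch is only the feedback loop in which each post-edit immediately influences the next suggestion.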

Two companies are at the forefront of this technology and of the race to develop it. The first is Lilt, a Silicon Valley start-up founded in 2015, although its work on adaptive MT started at Stanford back in 2011. The second is SDL, which promoted adaptive MT as one of the biggest innovations in the latest version of Trados Studio.

What are your thoughts on the matter? Do you have any experience with NMT or adaptive MT? In your opinion, which of these technologies seems more promising? Which one do you think will be more useful for professional translators? Is a combination of both the way forward? Do you have any concerns about confidentiality, especially regarding the corrections entered by post-editors? We invite you to browse the sources listed below to learn more about the current state of the art of adaptive MT!


Written by Ramón Inglada, Translator and Lecturer at Heriot-Watt University (Scotland). Ramón Inglada has been a professional translator since 2002. He has also been working as a university lecturer since 2012, teaching a wide range of courses, including English into Spanish Translation, Translation Technologies and Software Localisation.

Post prepared by Doris Fernandes del Pozo – Journalist, Translator-Interpreter and Communication Trainee at the Terminology Coordination Unit of the European Parliament. She is pursuing a PhD as part of the Communication and Contemporary Information Programme of the University of Santiago de Compostela (Spain)



  • Ferguson, Neil (2016) “AdaptiveMT for SDL Trados Studio 2017: a self-learning machine translation engine”, SDL Trados. Available at: (Accessed 15 June 2017)
  • Fernández Rouda, Juan Martín (2016) “Adaptive Machine Translation in a Nutshell”, LinkedIn. Available at: (Accessed 15 June 2017)
  • Lilt (n.d.) Lilt Labs. Available at: (Accessed 15 June 2017)
  • Slator (2017) Neural Machine Translation. Available at: (Accessed 15 June 2017)
  • Estopace, Eden (2017) “Neural Machine Translation Research Accelerates Dramatically”, Slator. Available at: (Accessed 06 June 2017)
  • Marking, Marion (2017) “SDL Sues Lilt For Patent Infringement”, Slator. Available at: (Accessed 06 June 2017)
  • Bou, Gaelle (2016) “How does Neural Machine Translation work?”, Systran Blog. Available at: (Accessed 06 June 2017)
  • TranslationZone (n.d.) “AdaptiveMT – next generation machine translation”. Available at: (Accessed 06 June 2017)
  • Trusted Translations (n.d.) “What is post-editing?”. Available at: (Accessed 06 June 2017)



  • Fazakas János

    Certainly an interesting development. However, I feel that one of the biggest problems translators are confronted with nowadays is the poor (to say the least) quality of the original texts. Gross spelling errors are the most minor of these problems. Just an example I stumbled upon today: in a Hungarian text I saw “bukfenc szárító”. In Hungarian “bukfenc” means “somersault”, so this must have been a “somersault dryer”. I had a hard time imagining who should somersault: the machine itself or the operator? :) Somersaulting until it dries, right? In the end it turned out to be an improvised Hungarian translation of “tumble dryer”. Just imagine if “somersault” were indeed to be “summer sault”. ‘Cause with this climate change one never knows, maybe summer will indeed sault, but I feel not just now :)

  • tahoar

    Ramón, there is a third option called “adapted MT”. Although it is similar to “adaptive”, the distinction is significant.

    As your reference links describe, adaptive MT is an automated subsystem that runs in the background, monitoring the translator as he/she changes the suggested MT text. It then adjusts the primary MT system’s behaviour based on those observations. An adaptive MT subsystem can be applied to almost any original source of MT text (except NMT). So, adaptive MT can be used with Google Translate, with a customized MT engine you create in KantanMT, or with other primary MT systems.

    The adaptive MT subsystem activates only when the translator changes the suggested MT text. If the suggested text is correct and the translator doesn’t change it, nothing triggers the adaptive MT subsystem. After all, why adapt if it was already correct? So, an adaptive MT subsystem applied to Google Translate would trigger over 90% of the time.

    On the other hand, “adapted MT” is a specific kind of customized MT engine: it learns a translator’s individual preferences as the translator creates it. An adapted MT engine maximizes the number of correct MT suggestions for a specific translator. The adapted engine works best for that individual, thus optimizing that translator’s performance. Any other translator who used that adapted engine might experience a performance boost, but cannot expect the same gain.

    Of course, it is possible to add an adaptive MT subsystem to an adapted MT engine. We do this as a human-supervised adaptation process called “terminology on-the-fly”. This is because unsupervised learning systems (like those your links reference) sometimes learn the wrong thing, and once they have learned it, it is very difficult to make them unlearn it. Just ask Microsoft: their “conversational understanding” AI bot, Tay, learned to be a Nazi racist in only one day. Microsoft couldn’t make it “unlearn” the bad things, so they had to shut Tay down immediately.

    The translator creates an adapted engine for himself and supervises the adaptation on the fly. This is what professionals do best.

    Regarding NMT systems, a translator’s improved performance with adapted MT significantly exceeds all published improvements from NMT systems. Technical limitations of early NMT algorithms preclude on-the-fly adaptive MT. NMT is a “bleeding edge” technology and researchers might overcome this limit, but only time will tell.

  • manuel herranz

    I’ve been developing SMT for 7 years; prior to that, it was 10 years of rule-based MT (as a user). We’ve stopped all SMT development and thrown the lot into neural networks and artificial intelligence. Adaptive is good, but that technology has gone as far as it can go. The starting point of NMT is so high that any small hybridisation will bring it closer to human quality. Whilst statistical MT “chops” sentences up as it decodes and requires massive amounts of data, NMT needs about half the training material. I have seen NMT infer declensions and conjugations in Russian, and kanji and character combinations in Japanese. The gains in Spanish or French are not “perceived” to be so great, as data and hybrids have produced fine-tuned systems and those languages are related. The greater gains come from decoding morphologically rich, unrelated languages or syntactically different language pairs.

    NMT takes the whole sentence into account. Therefore, it sorts out relationships like “his”/“her” much better in longer sentences. That level of granular correction is unachievable by adaptive MT, no matter how many times it is corrected, because it does not take context into account. Online training (the academic term for adaptive MT) has been around for at least a decade. It is dead as far as research goes – unless we try it now with neural networks…

    My five cents on the neural discussion can be found here

  • kvashee

    Here are some more references on Adaptive MT: SDL

    and Lilt

    and a look at how this compares with Neural MT

    The company most likely to connect Neural MT with Adaptive MT is SDL, which has just released some Neural MT products and has the resources to bring these two paradigms together.