Why machine translation should have a role in your life. Really!

Guest author Spence Green talks about a heated topic: Machine Translation, Translation Memories and everything in between. Spence Green is a co-founder of Lilt, a provider of interactive translation systems. He has a PhD in computer science from Stanford University and a BS in computer engineering from the University of Virginia.

Image: pixabay.com

It is neither new nor interesting to observe that the mention of machine translation (MT) provokes strong opinions in the language services industry. MT is one scapegoat for ever-decreasing per-word rates, especially among independent translators. The choice to accept post-editing work is often cast in moral terms (peruse the ProZ forums sometime…). Even those who deliberately avoid MT can find it suddenly before them when unscrupulous clients hire “proof-readers” for MT output. And maybe you have had one of those annoying conversations with a new acquaintance who, upon learning your profession, says, “Oh! How useful. I use Google Translate all the time!”

But MT is a tool, and one that I think is both misunderstood and underutilized by some translators. It is best understood as generalized translation memory (TM), a technology that most translators find indispensable. This post clarifies the relationship between TM and MT, dispels myths about the two technologies, and discusses a few recent developments in translation automation.

Translation Memory

Translation memory (TM) was first proposed publicly by Peter Arthern, a translator, in 1979. The European Commission had been evaluating rule-based MT, and Arthern argued forcefully that raw MT output was an unsuitable substitute for scratch translations. Nonetheless, there were intriguing possibilities for machine assistance. He observed a high degree of repetition in the EC’s text, so efficiency could be improved if the EC stored “all the texts it produces in [a] system’s memory, together with their translations into however many languages are required.” [1, p.94]. For source segments that had been translated before, high precision translations could be immediately retrieved for human review.

Improvements upon Arthern’s proposal have included subsegment matching, partial matching (“fuzzies”) with variable thresholds, and even generalization over inflections and free variables like pronouns.

But the basic proposal remains the same:

Translation memory is a high-precision system for storing and retrieving previously translated segments.
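The storage-and-retrieval idea, including the fuzzy matching mentioned above, can be sketched in a few lines of Python. Everything here is illustrative: the segments, the `tm_lookup` name, and the 0.75 threshold are invented for the example, and `difflib`'s character-level similarity is a crude stand-in for the matching algorithms real CAT tools use.

```python
from difflib import SequenceMatcher

# Toy translation memory: source segments mapped to stored translations.
tm = {
    "The committee approved the budget.": "Le comité a approuvé le budget.",
    "The committee rejected the proposal.": "Le comité a rejeté la proposition.",
}

def tm_lookup(source, threshold=0.75):
    """Return the best (score, stored_source, translation) at or above
    the fuzzy threshold, or None if nothing is close enough."""
    best = None
    for stored_source, translation in tm.items():
        score = SequenceMatcher(None, source, stored_source).ratio()
        if score >= threshold and (best is None or score > best[0]):
            best = (score, stored_source, translation)
    return best

exact = tm_lookup("The committee approved the budget.")      # score 1.0
fuzzy = tm_lookup("The committee approved the new budget.")  # partial match
miss = tm_lookup("Sales rose sharply in March.")             # None
```

An exact repetition scores 1.0, a segment with a small edit still clears the threshold as a "fuzzy," and an unrelated segment returns nothing at all. That last case is exactly the recall gap Arthern identified.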

Machine Translation

Arthern admitted a weakness in his proposal: the TM could not produce output for unseen segments. Therefore, the TM “could very conveniently be supplemented by ‘genuine’ machine translation, perhaps to translate the missing areas in texts retrieved from the text memory” [1, p.95]. Arthern viewed machine translation as a mechanism for increasing recall, i.e., a backoff in the case of “missing areas” in texts.

Think of MT this way:

Machine translation is a high-recall system for translating unseen segments.

Modern MT systems are built on large collections of human translations, so they can of course translate previously seen segments, too. But for computational reasons they typically store only fragments of each sentence pair, so they often fail to reproduce exact matches. TM is therefore a special case of MT for repeated text: TM offers high precision, and general MT fills in to improve recall.
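The precision/recall division of labor can be made concrete with a small sketch. The function and segment names are hypothetical, and `fake_mt` stands in for any real MT engine:

```python
from difflib import SequenceMatcher

def translate_segment(source, tm, mt_translate, threshold=0.75):
    """TM first for precision, MT as a backoff for recall."""
    # 1. Exact TM match: retrieve the stored translation directly.
    if source in tm:
        return tm[source], "exact TM match"
    # 2. Fuzzy TM match: offer the closest stored translation for post-editing.
    best_score, best_target = 0.0, None
    for stored, target in tm.items():
        score = SequenceMatcher(None, source, stored).ratio()
        if score > best_score:
            best_score, best_target = score, target
    if best_score >= threshold:
        return best_target, "fuzzy TM match"
    # 3. Unseen segment: back off to MT.
    return mt_translate(source), "MT backoff"

tm = {"Good morning.": "Bonjour."}
fake_mt = lambda s: "<MT output for: " + s + ">"  # stand-in for a real MT engine
```

This is Arthern's proposal in miniature: the memory answers whenever it can, and "genuine" machine translation covers the missing areas.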

Myths and countermyths

By understanding MT and TM as closely related technologies, each with a specific and useful role in the translation process, you can offer informed responses when you hear the following proclamations:

  • TM is “better than” MT – false. MT is best suited to unseen segments, for which TM often produces no output.
  • Post-editing is MT – false. Both TM and MT produce suggestions for input source segments. Partial TM matches are post-edited just like MT. Errors can be present in TM exact matches, too.
  • MT post-editing leads to lower quality translation – false. The translator is always free to ignore the MT just as he or she can disregard TM partial matches. Any effect on quality is probably due to priming, apathy, and/or other behavioral phenomena.
  • MT is only useful if it is trained on my data – neither true nor false. Statistical MT systems are trained on large collections of human-generated parallel text, i.e., large TMs. If you are translating text that is similar to the MT training data, the output can be surprisingly good. This is the justification for the custom MT offered by SDL, Microsoft, and other vendors.
  • TMs improve with use; MT does not – true until recently. Lilt and CasmaCat (see below) are two recent systems that, like TM, learn from feedback.

Tighter MT Integration

Major desktop-based CAT systems such as Trados and memoQ emphasize TM over MT, which is typically accessible only as a plugin or add-on. This is a sensible default since TM has the twin benefits of high precision and domain relevance. But new CAT environments are incorporating MT more directly as in Arthern’s original proposal.

In the November 2015 issue of the ATA Chronicle, I wrote about three research CAT systems based on interactive MT, that is, an MT system that responds to and learns from translator feedback. Two of them are now available for production use:

  • CasmaCat – Free, open source, runs locally on Linux or on a Windows virtual machine.
  • Lilt – Free, cloud-based, runs on all major browsers.

The present version of CasmaCat does not include TM, so I’ll briefly describe Lilt, which is based on research by me and others on translator productivity.

Lilt offers the translator an integrated TM / MT environment. TM entries, if present, are always shown before backing off to MT. The MT system is interactive, so it suggests words and full translations as the translator types. Smartphone users will be familiar with this style of predictive typing.
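In rough terms, the predictive behavior looks like the sketch below. The `suggest_completion` name is invented for illustration, and a real interactive system would re-decode the MT hypothesis constrained on the translator's prefix rather than doing a simple string comparison:

```python
def suggest_completion(mt_hypothesis, typed_prefix):
    """If the translator's typed text is a prefix of the current MT
    hypothesis, suggest the remainder; otherwise offer nothing (a real
    system would instead produce a new hypothesis for that prefix)."""
    if mt_hypothesis.startswith(typed_prefix):
        return mt_hypothesis[len(typed_prefix):]
    return None

# The translator has typed "Le comité a "; the system completes the rest.
rest = suggest_completion("Le comité a approuvé le budget.", "Le comité a ")
```

The translator accepts the suggestion with a keystroke or keeps typing, exactly as with predictive text on a phone.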

Lilt also learns. Recall that both TM and MT are derived from parallel text. In Lilt, each confirmed translation is immediately added to the TM and MT components. The MT system extracts new words and phrases, which can be offered as future suggestions.
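That feedback loop can be sketched as follows. The class and method names are invented for the example, and the real architecture is more involved (the MT component extracts new words and phrases rather than storing whole segments):

```python
class AdaptiveTranslator:
    """Toy model of a system whose suggestions improve with use."""

    def __init__(self, mt_translate):
        self.tm = {}              # grows with every confirmed translation
        self.mt = mt_translate    # backoff for segments not yet in the TM

    def suggest(self, source):
        # Prefer the TM entry if one exists; otherwise back off to MT.
        return self.tm.get(source) or self.mt(source)

    def confirm(self, source, target):
        # Immediately add the confirmed pair so it is reused next time.
        self.tm[source] = target
```

Before confirmation, a repeated segment gets raw MT output; afterward, it gets the translator's own confirmed translation.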

Conclusion

New translators should think about how to integrate MT into their workflows as a backoff. Experiment with it in combination with your TM. Measure yourself. In a future post, I’ll offer some tips for working with both conventional and interactive MT systems.

—————-
[1] Peter J. Arthern. 1979. Machine translation and computerized terminology systems: A translator’s viewpoint. In Translating and the Computer, B.M. Snell (ed.).

8 thoughts on “Why machine translation should have a role in your life. Really!”

  1. I like the crisp style in Mr. Green’s presentation here. I’m a seasoned translator with considerable experience in translation memory tools and in machine translation. So, I am familiar with the arguments discussed above.

    Two counterarguments, however:

    1) Many translators, technical or not, will be unfamiliar with the word “backoff” (a computer networking term). This discussion will be better served by avoiding jargon.

    2) While Mr. Green sensibly starts his presentation by pointing at how heated the argument among some translators can be whenever machine translation is mentioned, the headline doesn’t help his case but adds fuel to the fire, as if saying “MT will be part of your life, whether you like it or not.”

    As a corollary, there are not two camps in the language services industry as far as translators and MT are concerned. To characterize translators as for or against MT, even for the reasons described above, is to offer a reductive and oversimplistic outlook on the state of the art.

  • Thanks for reading, Mario. I agree that in between pro-MT translators and those who are against it, there are varying degrees of (dis)agreement. However, when presented with only two options, our brains tend to favor one over the other, even if we can’t conclusively say we are in favor of or against it. Personally, I find the MT subject fascinating, and while it won’t apply to the creative work I do, I am interested in learning more about it and finding ways of incorporating new technology into my work–when possible. It’s plain Technological Darwinism.
