Translators should update their strategies for the modern era. I recommend software translation using localization tools; it is a thriving business. One tool that will help a lot in the process is https://poeditor.com, which has a translation memory that saves a great deal of time.

The opportunity to rescue ancient or rarely spoken languages from oblivion enriches our understanding of history. It is in the idiomatic expressions of languages that we typically stumble across remnants of traditions, or hints about which animals, foods or people were valued. Being able to decipher the linguistic record of an ancient community is like piecing together shards of pottery at an archaeological dig, adding to our understanding of earlier or more exotic cultures. As machine translators improve, they will make the initial decoding of unknown or "found" languages faster.

ALL languages have multiple dialects - regional, class-based, ethnic etc. Commonly, only one or two (in the case of English) are used for official & formal communication, & are therefore regarded as the 'correct' form of a language. That is bulltwang, of course, since the language variation used by an individual or community is the 'correct' variation for that individual or community. Linguistic purists often try to ignore the fact that all variations of a particular language influence each other, including the 'official' variation. Will computer translation programmes be successful in keeping up with the ongoing dynamics of language change? Not unless those programmes themselves are capable of learning from the input they receive. And since the overwhelming proportion of such input is & will continue to be made in the 'official' versions of most languages, I rather doubt it.

Duolingo attempts to avoid the pitfalls of machine translation by crowd-sourcing: having actual humans do the translating for free while they're learning the language. That can lead to some weird translations, but averaged over a couple dozen translations of the same thing, the results seem to be pretty good. It's created by the same people who made the reCAPTCHA test that checks we're not robots (which apparently also started out as crowd-sourcing, but of transcribing old newspapers and books that optical character recognition doesn't work on).
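For the curious, here is a minimal sketch of how that kind of crowd aggregation might work (a toy of my own, not Duolingo's actual pipeline; the example sentences are invented): collect the candidate translations of one sentence and keep the most common one.

```python
from collections import Counter

def aggregate_translations(candidates: list[str]) -> str:
    """Majority vote over crowd translations of one sentence.

    Toy illustration only: a real system would normalise more
    aggressively and weight contributors by track record.
    """
    normalised = [" ".join(c.split()) for c in candidates]
    winner, _count = Counter(normalised).most_common(1)[0]
    return winner

# A couple dozen (invented) learner translations of the same sentence:
crowd = (["The cat drinks milk."] * 14
         + ["The cat is drinking milk."] * 8
         + ["Cat drink milk."] * 2)
print(aggregate_translations(crowd))  # -> The cat drinks milk.
```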

Free or commercially available machine translation (MT) software cannot be expected to render with accuracy the meaning of a foreign-language text, and thus MT systems cannot guarantee that their output is suitable for any purpose other than obtaining the gist of a text. The result can be a very poor translation, leading to a negative image for the client and potentially loss of business.

One other aspect of the translation process that MT systems will never fulfill is the human communication between translators and clients: professional translators are trained communication specialists who take pride in their work, are readily available to respond to any concerns and additional requests—even the subtlest ones—from their clients, and are committed to assisting their clients in achieving successful communication outcomes.

Another important point to be made concerns the use of online MT with respect to confidentiality. Confidentiality is a fundamental value in translation, which is compromised by those MT tools that store information online. The information fed into an online MT tool remains stored in the engine and can be accessed and used by others. Thus, access to proprietary information could be in the hands of third parties alien to the duty of confidentiality that binds a translator with a client.

In short, machine translation does not offer the consistently high quality and accountability that a professional translator does, and information obtained by means of machine translation cannot be considered reliable. Therefore, users of publicly and commercially available machine-translation software should be aware of the dangers involved, such as compromised quality and confidentiality, and making any decisions or taking any actions based on machine-translated information shall be at the users' own risk.

International Association of Professional Translators and Interpreters (IAPTI)
www.iapti.org

The last lingua franca? I rather think that machine translation would render THE REST of the languages useless. Right now there are a lot of countries (I'd say most of them) where several languages live together, helped by a common language, which is very often English, but also French, Spanish, etc. If I live in Nigeria and speak Hausa and English, and my children can enjoy a perfect translation both ways, I find it very likely that they would end up speaking ONLY English and taking advantage of machine translation when necessary. The same would happen with all the Chinese languages and Mandarin, all the local languages and Spanish in Latin America, local Arabic dialects and Standard Arabic, etc.
We could find that in 2-3 generations the range of spoken languages had shrunk to just the 6-7 biggest (English, Mandarin, Spanish, Portuguese, French, Russian, Arabic...), and maybe in 2-3 more we'd have only English.

Until things advance to this level, computerized translation may produce passable, intelligible responses, but anyone with real knowledge of either of the two languages will be able to immediately discern it as machine output.

The AI world has the 'Turing test': AI has advanced sufficiently when the user cannot discern whether they are talking to a computer or a human. The same test applies to machine translation -- when the reader cannot tell the difference between machine-translated text and text written by a native speaker, we will have achieved Ostler's dream.

While Google Translate's statistical method is exciting, and produces fairly capable results IF there are sufficient high-quality dual-language texts in the particular subject area you are working in, the system suffers from a significant flaw. The problem is one of "corpus degradation": what happens, essentially, when a large body of documents increasingly consists of translations of translations of translations. In a sense, Google Translate is a victim of its own success: the crowd-sourced nature of the approach means that many of the texts being consulted by the statistical engine were themselves produced by non-native authors using Google Translate in the first place. Thus Google Translate treats a translated text - one that has not been improved by, say, a native-language copy editor - as a perfectly valid referent, despite the fact that the translated text is actually a very poor translation.

The other issue Google Translate must deal with is complex grammatical structures - not one of the strengths of a statistical system. The inadequacy of the system's approach can be most clearly observed in German > English translation, for example. Google Translate has a very poor grasp of German syntax UNLESS a large body of relatively high-quality translations already exists in the particular subject area you're working in.

Corpus degradation is the major problem, however, because if you're basing your own translation on derivatives of derivatives of texts that were poorly translated in the first place, the degradation curve dives ever more steeply downwards! For this reason I worry that Google Translate may not be able to progress much further; that we've already reached the top of the curve, as it were, and are likely to see accelerating disintegration rather than ongoing improvement - ironically in the more popular, hence more widely written-about, subject areas in particular.
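To see why the curve dives, here is a toy simulation of that feedback loop (purely illustrative: the 10% per-pass error rate is an invented parameter, and the word-garbling model is a crude stand-in for real MT errors):

```python
import random

random.seed(42)

def noisy_translate(words: list[str], error_rate: float = 0.1) -> list[str]:
    """Stand-in for one pass through an imperfect MT system:
    each word is independently garbled with probability error_rate."""
    return [w if random.random() > error_rate else "<garbled>" for w in words]

def fidelity(original: list[str], current: list[str]) -> float:
    """Fraction of the original words that are still intact."""
    return sum(a == b for a, b in zip(original, current)) / len(original)

original = ("the quick brown fox jumps over the lazy dog " * 20).split()
text = original
for generation in range(1, 6):
    text = noisy_translate(text)  # the output re-enters the corpus
    print(f"generation {generation}: fidelity = {fidelity(original, text):.2f}")

# Fidelity decays roughly as (1 - error_rate) ** generation, so errors
# compound once translations of translations feed back into the corpus.
```

Each pass loses only a tenth of the words, yet after five generations barely 60% of the original survives; substitute whole mistranslated phrases for single words and you have the degradation curve described above.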

But they do have a bunch of clever engineers, so who knows what they'll come up with next? The fact that such crowd-sourced translation is possible at all is something that constantly amazes me as I look back over nearly 30 years in the translation industry.

Google Translate does a rather bad job even when translating from one European language to another. I've used it a couple of times to translate webpages in non-European languages, and invariably the result was nearly impossible to understand. I have no idea how good and fast progress will be, but since fans of the service already claim that the thing is near-perfect, I'm not holding my breath.

The mere thought of Google Translate becoming so accurate as to perfectly grasp (and render) the meaning encoded in the quirky twist(er)s of baffling languages (some of them downright lovable) casts a dim shadow (yet a shadow still) on my efforts as I struggle to learn a tricky language. I might just as well go into IT and come up with C-3PO-like software able to do the work of a million linguists per second, instead of patiently counting letters and admiring their design; for who knows, great things come to life when the human brain is annoyed by its limitations. Easier said than done, 'cause what do I know about IT? Ergo, I might be better off learning a couple more words instead of daydreaming in binary, for the simple reason that turning "meaningless" sounds into words that eventually (begin to) ring familiar is an amazing feeling. Even if it takes years. No software could ever "activate" that feeling. Unless we've all turned into robots greedily ingesting terabytes of artificial knowledge. And even then, I predict there will still be "alternative" natural foreign-language acquisition. Because it makes (some) people happy. And as far as I know, happiness is a temptation difficult to resist :)

Machine translation will never replace human translation. Machines have no cultural background. Try and translate the phrase "She has other fish to fry." into any other language. A literal translation will wholly miss the point.

There is a major issue with GT that nobody seems to have addressed. When you use it, your source text is added to their database, becomes Google's property and is thus made public. How do companies feel about having their confidential reports and papers published on the internet? Until GT can provide the same level of confidentiality as a human signing a non-disclosure agreement, we have some way to go before we are replaced. And for translators like myself in the Scandinavian languages (especially non-EU ones like Norwegian), I can assure you that GT is many, many years from replacing us just in terms of competence...

It will be a long time before machine translation displaces the use of a lingua franca. The meaning of a word in context can differ strongly from the sense the speaker intended, something that would be hard to program a computer to understand.

I translated the first paragraph of this article from English into Polish, then back into English, using Google Translate. The result is surprisingly good:

"Since God has cursed the people of Babel, we are left with imperfect communications solutions across borders. One of these was the lingua franca, commonly known second language, in which different nationalities speak. This trick was enough for thousands of years, but it can reach the end of its life, by Nicholas Ostler, author of "The Last lingua franca" (which the Economist reviewed in 2010). Machine translation software can be so advanced as to make useless learning a second language."

Of course the movement towards machine translation is magnificent and very useful in many ways, but in brain science it is held that the more one's brain relies on man-made machine extensions, the more impoverished the individual brain becomes. For example, one recommended way to delay age-related dementia is to learn a second language. Delegating all your second-language processing to a machine will not help your brain. Learning new skills is associated with increased dendritic complexity and a measurable increase in brain-area volume. “Use it or lose it” is a common saying in brain medicine. Yes, use these magnificent foreign-language tools, but beware of impoverishing one's own biological capacity; it would be sad to become a “vegetable” by relying too heavily on external gadgets.

For spoken communications I'm very doubtful, at least for people who regularly need to speak with people from other countries. Even if an efficient translator were developed, it would never be as easy and simple as speaking directly in a common language (assuming both sides are fairly fluent).

For written texts it's much more open. Machine translation is never going to be perfect, but in many cases that's not required - often the "competition" is not professionally translated documents, but documents written by people who speak English poorly. As someone working in a multinational that uses English for most cross-country communications, I see many emails and internal documents where the English is no better than what Google Translate would produce today. And there are often enough embarrassing errors in them too. People who need English only occasionally, mainly for written communications, might end up being better off just using machine translation instead.

Also, using Google Translate and then fixing its errors is already much faster than translating from scratch. I've started doing that instead of from-scratch translations for internal documents or user manuals - it's maybe ten times faster (and hence ten times cheaper) than pure translation. That method wouldn't be appropriate for translating a novel, but generalising it with improved machine translation could drive down the cost of translations enough to lead to novel uses - maybe in ten years The Economist will be available electronically in five or six languages, each done by one or two translators? And the cheaper the translation gets, the more languages it becomes commercially viable to offer it for.

There is a bias to look out for when judging translations by Google Translate (or any other machine translation system, for that matter): one usually knows both the source and the target language of the pair being tested; how else could the accuracy of the translation be judged?

But as a consequence, one is no longer in a position to tell whether the translated text is actually understandable or not, since its meaning was known even before it was translated.

The upshot: machine translation tends to be less useful in practice - when the source text really is unknown to the reader - than such judgments suggest.

This may (just barely) help where consulting foreign-language documents is the issue; but for producing them, I predict embarrassing FUs will be the norm. Especially since errors that are not weeded out would soon fill the collective translation memory and pollute the target languages, as has already happened with careless human translation, but on a much larger scale.

As a professional translator, I don't feel threatened by machines (yet). Their output is everywhere to be seen, easily identified and frankly inadequate. Contrary to what is implied in the article, legalese (EU or not) can be even more tricky, and of course mistakes can be horrendous (same in contracts, med and tech). Those customers who would value savings over quality are already using low-cost sweat-shop translation and their going over to the machines will not change much for the trade. Expect no improvement on users' manuals from certain countries, though.

English may indeed be superseded as a lingua franca, but not anytime soon, and no serious contender is in sight. On the other hand, several "regional" lingua francas (or linguae francae, if you prefer) could well coexist, as is already the case in multilingual countries and geopolitical units.

Today it is actually possible to obtain decent, good-quality text translations when confined to a specific subject matter, like engineering, IT or medicine. A number of "taxonomies" exist that make these translations acceptable.

At the same time, semantic search algorithms have added considerations of grammar and syntax to the statistical analysis of word sequences.
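To make "statistics of word sequences" concrete, here is a minimal bigram language-model sketch (an illustrative toy with a hand-made two-sentence corpus, nothing like a production engine): it scores a candidate sentence by how often its adjacent word pairs occur in the training text.

```python
from collections import Counter

# Tiny hand-made training corpus; real engines use billions of words.
corpus = "the valve controls the flow . the pump drives the flow .".split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def score(sentence: str) -> float:
    """Product of conditional bigram probabilities P(word | previous),
    with add-one smoothing so unseen pairs don't zero the score out."""
    words = sentence.split()
    vocab = len(unigrams)
    p = 1.0
    for prev, word in zip(words, words[1:]):
        p *= (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)
    return p

print(score("the pump controls the flow"))   # plausible order: higher score
print(score("flow the controls pump the"))   # scrambled order: lower score
```

Pure word-sequence statistics like these know nothing about grammar as such; layering parsing on top is what the "considerations of grammar and syntax" add.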

It is speech recognition that faces a real nightmare: try Siri with a Californian or Scottish accent for a laugh, or put the machine in the hands of an Indian speaker for some embarrassment. Indeed, Apple has some major rework in progress.

But it is the linear-extrapolation concept that is most flawed... you will have a good translator in less than 24 months from now. Archive this and check :))