The Evolution Of Machine Translation

Machine translation is a subfield of computational linguistics. It uses software to translate speech or text from one language to another without human intervention.

How did it begin?

The earliest examples of Machine Translation (MT) systems were government funded. The
exorbitant cost of computers meant there was no personal computing market, and the required
technology was far too expensive for MT to become an economical business pursuit.

Types of Machine Translation

Rule-based

Statistics-based

Neural Machine Translation

The original goal behind MT was to build computers that could perform rule-based translation independently. In other words, the machine would be taught the full vocabulary and
grammar of multiple languages so that it could translate autonomously.

In 1954, the IBM 701 successfully translated 49 sentences on the topic of chemistry from Russian into English. Around this time, MT saw a shift from military to civilian interests. Globalization called for greater integration of markets worldwide. A basic understanding of the languages spoken across the world became a necessity for any commercial organization.

In 1997, the free online translation service BabelFish facilitated translations between
English, German, French, Spanish, Portuguese and Italian. It came with its share
of performance issues. It was frequently confused when asked to choose between words with different meanings in the target language. Rule-based MT systems lacked extra-textual
knowledge. For example, the system would get mixed up when asked whether a boat sailed on a “Sea” or “See.”

The second stage of MT came in the form of Statistical MT (SMT). It is the technology that
originally powered Google Translate. SMT works on the idea that if you feed enough data into the computer’s
expansive memory in the form of parallel texts in two languages, it will be able to spot and
recreate the statistical patterns between them. The system is self-learning: it improves as its corpus grows. Google Translate also has a Feedback option that lets users suggest better
translations, by which the system grows the treasury of words and phrases it can draw upon later.
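The pattern-spotting idea behind SMT can be sketched in a few lines. The snippet below is a toy illustration only, built on a hypothetical four-sentence English–French corpus: it counts how often each source word co-occurs with each target word and normalizes by how common the target word is, so that frequent words like “la” do not win by default. Real SMT systems use far more sophisticated alignment and language models.

```python
# Toy sketch of the statistical idea behind SMT: learn word translations
# purely from co-occurrence counts in a parallel corpus.
from collections import Counter, defaultdict

# Hypothetical parallel corpus (illustration only).
parallel_corpus = [
    ("the house", "la maison"),
    ("the car", "la voiture"),
    ("the blue house", "la maison bleue"),
    ("the blue car", "la voiture bleue"),
]

# Count how often each English word appears alongside each French word.
cooccurrence = defaultdict(Counter)
target_counts = Counter()
for english, french in parallel_corpus:
    target_counts.update(french.split())
    for e in english.split():
        for f in french.split():
            cooccurrence[e][f] += 1

def translate_word(word):
    # Crude scoring: co-occurrence count normalized by overall frequency
    # of the target word, a rough cousin of word-alignment probabilities.
    scores = {f: c / target_counts[f] for f, c in cooccurrence[word].items()}
    return max(scores, key=scores.get)

print(translate_word("house"))   # "maison" wins over the ubiquitous "la"
print(translate_word("blue"))    # "bleue"
```

Even this toy version shows both the strength and the weakness of the approach: it learns word pairings from data alone, but it has no notion of what any of the words mean.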

While an improvement upon RBMT, SMT only matches patterns while translating. It
fails to capture the meanings that humans associate with the language being translated. It excels at translating scientific and technical writing but cannot interpret colloquial or artistic
language. For example, many traditional Chinese medicine names cannot be translated because they have no parallel English terms. They are also tied to Chinese culture, of which SMT has no knowledge.

The philosopher Cicero said that translation needs to be “non verbum de verbo, sed sensum exprimere de sensu” – not word for word, but sense for sense.

The last and most recent stage of MT is Neural MT (NMT). Neural MT consists of neural networks
trained and optimized to perform translation. It uses deep learning to analyze vast
amounts of translations already performed by human translators. NMT can account for whole sentences, understand context, and work with linguistic subtleties that could never be programmed into a statistical model. As a result, NMT is more fluent and natural in its translation. It mimics the workings of the human brain in its ability to learn and form neural pathways. The structure of the neural network makes the system more adaptive in handling complexity than a system based on rules and statistics. It can also learn from its mistakes and adjust accordingly to perform better next time.

NMT has already proved to be miles better than SMT. Yet there is a long way to go before
human translators can be replaced. Post-editing of MT output will open up new growth
opportunities for translation service providers. The future of MT will be a symbiotic relationship
between human translators and machine translators.