Category Archives: QA standards

Ever taken a robocall? Pretty annoying, huh? A machine dials your phone number at dinnertime, and a prerecorded message plays on the other end of the line. The next morning, you need to call the DMV because you changed addresses. It is seldom a live human voice that answers the phone. We are all painfully familiar with the stock phrase offering us a language option: “Press 1 for English, press 2 for Spanish…”

AT&T and other companies save a lot of money by using IVRs (interactive voice response systems). A computer, not a human operator, interacts with a caller and responds to or routes calls according to the nature of the query. Call centers fully operated by humans are costly to run. One way to reduce this cost is to outsource customer service (or technical support) to a cheaper call center overseas. You or someone you know has already experienced this in the form of a support call for a company like Dell Computers taken by an India-based call center. The gutturally accented English is noticeable. I have personally met some people at call centers in Córdoba, Argentina, whose customers include cellphone companies. These employees studied British English in college, which shows as a slight accent. This can be very annoying to a customer who is already irate about poor service.

In the genial movie WarGames, the term machine comes up with various characters in different situations, but the viewer gets the impression that there’s a question mark attached to the seemingly evident advantages given by our wondrous technologies. In the movie, a fully automated computer system called WOPR (War Operation Plan Response) is in charge of controlling the launch of nuclear missiles, eliminating the need for human intervention. At one point, the WOPR erroneously detects that a Russian nuclear missile attack is under way. General Beringer, in charge of NORAD, asks Dr. McKittrick what the WOPR recommends, and the response is “Full-scale retaliatory strike.” Bemused and sarcastic, Gen. Beringer responds, “I need some machine to tell me that?”

The WOPR system at NORAD in the movie WarGames

This could point to one of the morals of the story: Do we need a machine to tell us the obvious? Take some feature of your word processor, for example: if the spellchecker says it’s okay, it must be okay, right? Some automated features can even become a hindrance to productivity and performance. I had a taste of this last week when I was readying a file to be exported back to Word format for a rush job. Because of some incomplete or corrupt codes, which I couldn’t immediately fix, the program repeatedly and consistently failed to export the file. Racing against the clock, it took me a few minutes of fiddling with the options until I fixed the problem.

Had I translated the document in plain word-processing fashion, with no CAT tool at hand, I would not have faced a corrupt code problem to begin with. But we translators also love technology, and the occasional hiccup is the price we pay for a more streamlined (irony intended) performance.

A few months ago, I ran into a more intractable problem. I was typesetting a Burmese translation in an InDesign CS3 document. Not knowing Burmese (a beautiful script, with elegant strokes and fanciful characters), I first struggled to find the correct font to display the characters properly, and then with the ligatures so that the words connected as they should. Had I been working with a handwritten copy, I would have just erased the offending stroke, line or letter and rewritten it. But complex software like InDesign automates things like ligatures, kerning and other font features. It took me hours to get things right. Despite my technical knowledge, I still had to send a PDF copy in Burmese for approval by a human Burmese translator to make sure the script looked right before final delivery.

You trust your dryer to do a proper job with your clothes, but would you trust a robot to paint your house? Surely you do online banking and do your taxes with the help of software, but would you depend on artificial intelligence or ask a machine for financial advice? If you are single and looking, would you ask your friends to match you up with someone, or would you trust the software on a dating site to do it? Would you carry on a love conversation with an Internet bot? Would you trust your company’s marketing tagline to a piece of software? Would you let software write a sports column?

Actually, the latter scenario is already possible, thanks to Narrative Science’s software. Last month, I spoke with Larry Adams, one of Narrative Science’s representatives, about the main features of their program, which mines data to author a piece of writing that is basically indistinguishable from what a human writer would create.

What if you need an email written in Mongolian translated into English in a rush? Enter Google Translate or any number of other software solutions powered by machine translation. What drives the translation of large volumes of content, or bulk translation, is speed, not quality. Large companies that can afford the expense of custom-built machine translation solutions already create multilingual versions of their technical documentation. Companies with a smaller wallet have to content themselves with us, human translators. For the sake of argument, I’ll oversimplify the issue a little bit. There are large translation companies that operate in bulk and outsource language services to the cheapest providers, from India to Argentina. Other companies try to stay competitive by emphasizing quality and hiring a more costly professional workforce in developed countries. The downward push on translation costs continues. After all, translation is usually viewed as a necessary cost of doing business, like buying office supplies or ordering printer ink cartridges.

While American business owners recognize the need for and advantage of translating the documentation for their products or services, it is difficult for them to see the direct connection between higher sales and better-written translations. Hence, the advantages of quality translation remain intangible, noble concepts in an abstract world. Companies with overseas offices trust their salespeople in the different geographies to check the accuracy of the translated documents. In-country reviews are an established quality control, but translation managers often face an uphill battle to perform these reviews according to quality translation standards, because the reviews’ completion depends on the time and availability of the reviewers, the very people in charge of selling and marketing the products. Their main job is to market and sell, not to sit down and review translations, a task that is not a natural part of their role.

In the meantime, companies are offered a variety of technologies to automate most of the translation process: translation memories, terminology databases, automated quality controls, confirmed translations with lock-out of changes (so that future translators or editors cannot modify them once approved), and, of course, machine translation. As machine translation reaches new, more solid performance markers, a question insinuates itself: Will it be possible to completely automate the creative process of translation by sheer data mining and parsing of linguistic patterns in the corpus?
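To give a concrete sense of the first of those technologies: a translation memory is, at its core, a store of previously translated segments that is queried by similarity. The sketch below is a minimal illustration in Python; the sample sentences, the similarity threshold and the use of `difflib` are my own illustrative assumptions, and commercial CAT tools use far more sophisticated segmentation and match scoring.

```python
# Minimal sketch of a translation-memory lookup via fuzzy matching.
# The memory contents and the 0.75 threshold are illustrative assumptions.
from difflib import SequenceMatcher

memory = {
    "The printer is out of ink.": "La impresora no tiene tinta.",
    "Order replacement cartridges online.": "Pida cartuchos de repuesto en línea.",
}

def best_match(segment, threshold=0.75):
    """Return (source, target, similarity) for the closest stored segment."""
    scored = [
        (src, tgt, SequenceMatcher(None, segment, src).ratio())
        for src, tgt in memory.items()
    ]
    src, tgt, score = max(scored, key=lambda item: item[2])
    return (src, tgt, score) if score >= threshold else (None, None, score)

# A near-identical sentence retrieves the stored translation as a fuzzy match.
src, tgt, score = best_match("The printer is out of ink!")
```

Even this toy version shows why the tool speeds up repetitive technical text, and also why it contributes nothing when a segment has never been seen before.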

There is an intriguing article on self-driving cars in the latest issue of WIRED magazine. In the near future, it may be possible to leave the driving to an advanced vehicle. Software solutions devised by companies like Narrative Science may make the high cost of writing routine sports news and financial articles a thing of the past, once the engine is properly customized. There seems to be a technological answer to our most pressing problems. Will translators be relegated to the mere role of editors, no longer creators of original translations?

Machines and software, regardless of their level of automation, still operate in GIGO fashion (garbage in, garbage out). The machine is no better than the operator who programmed it. Intuition, creativity, the right turn of phrase, the cumulative good judgment that comes from years of writing experience: none of these can be automated. Your business uses complex software and complex machines to churn out products and project sales. But who do you turn to for sales, marketing or financial advice vital to your business? A machine? A software bot?

Towards the end of the movie WarGames, General Beringer faces a crisis. The highly sophisticated WOPR system warns of an impending Russian attack in the form of 2,100 missiles, which may or may not be a simulation. The general is torn between ordering a real retaliatory strike and assuming that it’s a computer game gone awry, while the U.S. president waits for a decision on the phone. The creator of the WOPR, Stephen Falken, reasons with him in this moment of terror:

“General, you are listening to a machine. Do the world a favor and don’t act like one.”

I am an avid Mythbusters fan for scientific, entertainment and, now, language reasons. In a recent episode (or rerun, I’m not sure), Adam explained the rationale behind his motto “Failure is always an option.” Paraphrasing his explanation: scientists do not look at failure as, well, failure, but as a learning experience. The purpose of scientific experimentation is to collect data to determine the feasibility of a certain process, for example, testing the tensile strength of a certain type of steel fiber.

He added (or I think it was Jamie who said this…) that the point of scientific experimentation is not to succeed in every attempt but to learn from the information acquired from the ‘misses’. I agree. I also found this reasoning to be a fresh outlook on translation errors. Phrases like “perfect translations” or “error-free translations” fill websites and business communications in our industry.

Why are we so afraid of making mistakes? Why do we make the colossal error of equating absence of translation issues with high quality translation?

This posting is a follow-up to a previous one regarding Rethinking Translation QA. What are your thoughts?

After completing two translation tests for a prospective customer, I was given some feedback. It was not what I wanted to hear. From ‘translator is not quite familiar with the industry terminology’ to ‘needs supervision’, the comments were stinging. Why would I feel bothered by an anonymous critique, you may ask? For the same reason you would be bothered if a stranger told you that you don’t know how to run your business.

I wrote back to my prospective customer and expressed my frustration at the disconnect between the severity of the criticism and the kind of “errors” found in my translation tests. The main point I tried to make was that many of the “errors” were preferences of the translators or editors who checked my translations. Weeks later, I received an email expressing concern, approving my vendor status and offering to communicate better. I replied to my prospective customer, saying that, apparently, she takes translation test results as only one of many factors in deciding whom to hire as a freelancer. Her message reads as follows:

I definitely do not just use the errors in the sample to determine the approval of a translator. I take into account many different things. I even take into account the tone and wording of the e-mails and telephone conversations in general. That tells me a lot about a person. A sample of 350 words is hardly enough to base my entire judgment on.

What bearing does this have on translation quality control? It shows that error counting does little to tell you what you need to know about a freelance translator. I’ve been thinking about the whole business case for implementing translation quality standards, and I think that some in the industry are so focused on finding errors that they can’t see the forest for the trees.

For QA to work in any field, it has to offer practical, cost-effective instruments to measure things. But you need to find measurable things. Languages are not like math or geography or archeology. How do you measure a language? How do you even measure if a document is well written? By counting the typos or syntax errors? Then how do you measure style?
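To be fair to the counting approach, here is what it looks like in practice: error-based metrics in the spirit of the LISA QA model or SAE J2450 weight each error by severity and normalize per thousand words. The weights and pass threshold in this sketch are my own illustrative assumptions, not any published standard.

```python
# Illustrative sketch of an error-count QA score.
# The severity weights and the 15-point threshold are assumptions for
# demonstration, not values from LISA, SAE J2450 or any other standard.
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def quality_score(errors, word_count, threshold=15.0):
    """Return (weighted error points per 1,000 words, pass/fail).

    errors: list of severity labels, e.g. ["minor", "major"]
    """
    points = sum(SEVERITY_WEIGHTS[e] for e in errors)
    density = points * 1000 / word_count
    return density, density <= threshold

# A 350-word test (the sample size mentioned above) with two minor
# issues and one major one scores 20 points per 1,000 words: a "fail".
density, passed = quality_score(["minor", "minor", "major"], word_count=350)
```

Notice everything the number leaves out: whether the “major” label was a genuine error or a reviewer’s preference, and whether the text actually reads well, which is precisely my point.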

I posit that none of this can be measured in any meaningful way. I propose a different way to ‘measure’ translation quality: effectiveness.

Now you’ll tell me, ‘But effectiveness cannot be measured!’ And you might be right…to a point. Consider marketing campaigns. An effective marketing campaign is one that increases sales, builds name recognition and gets people talking about your company and your product. A similar strategy can be employed for translation effectiveness. The beauty of it is that the focus is on business results.

This is an ongoing analysis and a work in progress. I am not claiming to have found the ultimate solution to measuring translation, but my experience strongly suggests that we are going about it the wrong way. Go ahead, measure words and errors all you want. You will end up empty-handed.