Nearly Simultaneous Translations Within Ten Years

  • Nearly Simultaneous Translations Within Ten Years

    The Language Barrier Is About to Fall

    Within 10 years, earpieces will whisper nearly simultaneous translations—and help knit the world closer together




    By Alec Ross



    It used to be the case when I traveled abroad that I would take a little pocket dictionary that provided translations for commonly used phrases and words. If I wanted to construct a sentence, I would thumb through the dictionary for five minutes to develop a clunky expression with unconjugated verbs and my best approximation of the correct noun. Today I take out my phone and type the phrase into Google Translate, which returns a translation as fast as my Internet connection can provide it, in any of 90 languages.

    Machine translation is leaps and bounds faster and more effective than my old dictionary method, but it still falls short in accuracy, functionality and delivery. That won’t be the case for long. A decade from now, I predict, everyone reading this article will be able to converse in dozens of foreign languages, eliminating the very concept of a language barrier.

    The most interesting innovations will come with the hardware development for the human interface. In 10 years, a small earpiece will whisper what is being said to you in your native language nearly simultaneously as a foreign language is being spoken. The lag time will be the speed of sound.

    Today’s translation tools were developed by computing more than a billion translations a day for over 200 million people. With the exponential growth in data, that number of translations will soon be made in an afternoon, then in an hour. The machines will grow exponentially more accurate and be able to parse the smallest detail. Whenever the machine translations get it wrong, users can flag the error—and that data, too, will be incorporated into future attempts.

    It is just a matter of more data, more computing power and better software. These will come with the passage of time and will fill in the communication gaps in areas including pronunciation and interpreting a spoken response.
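    The flag-and-learn loop described above can be sketched in a few lines. This is an illustrative toy, not any real service's API: a genuine system would fold flagged corrections back into model training, whereas here a simple override cache stands in for that process, and the phrase table is invented.

    ```python
    # Toy sketch of the "flag the error" feedback loop. The machine_output
    # table is invented; the second entry is a classic mistranslation.
    machine_output = {
        ("gato", "en"): "cat",
        ("embarazada", "en"): "embarrassed",  # wrong: a false friend
    }
    corrections = {}  # user-flagged fixes take precedence

    def flag(source, lang, better):
        """Record a user correction for a bad machine translation."""
        corrections[(source, lang)] = better

    def translate(source, lang):
        """Prefer a user correction if one exists, else the machine output."""
        return corrections.get((source, lang), machine_output[(source, lang)])

    flag("embarazada", "en", "pregnant")
    print(translate("embarazada", "en"))  # -> "pregnant"
    print(translate("gato", "en"))        # -> "cat"
    ```
    
    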

    Nor will the voice in your ear be a computer voice, à la Siri. Because of advances in bioacoustic engineering measuring the frequency, wavelength, sound intensity and other properties of the voice, the software in the cloud connected to the earpiece in your ear will re-create the voice of the speaker, but speaking your native language. When you respond, your language will be translated into the language of your counterpart, either through his or her own earpiece or amplified by a speaker on your phone, watch or whatever the personal device of 2025 is.
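    The earpiece the essay imagines is a three-stage pipeline: recognize the incoming speech, translate it, then synthesize the result in the original speaker's voice. A minimal sketch follows; every component here is a hypothetical stub (the recognizer, the phrase table and the voice-cloning synthesizer are all invented for illustration), not a real cloud API.

    ```python
    # Sketch of the earpiece pipeline: recognize -> translate -> synthesize
    # in the speaker's own voice. All three stages are stubs.
    from dataclasses import dataclass

    @dataclass
    class Utterance:
        text: str
        voice_profile: str  # stand-in for a bioacoustic voice fingerprint

    def recognize(audio: bytes) -> Utterance:
        # Stub for a cloud speech-recognition service.
        return Utterance(text="Bonjour, comment allez-vous ?",
                         voice_profile="speaker-42")

    def translate(text: str, target_lang: str) -> str:
        # Stub for machine translation: a one-entry phrase table.
        phrases = {("Bonjour, comment allez-vous ?", "en"): "Hello, how are you?"}
        return phrases.get((text, target_lang), text)

    def synthesize(text: str, voice_profile: str) -> str:
        # Stub for voice-cloning TTS: same voice, different language.
        return f"[{voice_profile} speaking] {text}"

    def earpiece(audio: bytes, listener_lang: str) -> str:
        heard = recognize(audio)
        return synthesize(translate(heard.text, listener_lang),
                          heard.voice_profile)

    print(earpiece(b"...", "en"))
    # -> "[speaker-42 speaking] Hello, how are you?"
    ```

    The design point is that the voice profile is carried through the pipeline separately from the words, so the synthesis stage can speak the translated text in the original speaker's voice.
    
    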


    Today’s translation tools also tend to move only between two languages. Try to engage in any sort of machine translation exercise involving three languages, and it is an incoherent mess. In the future, the number of languages being spoken won’t matter. You could host a dinner party with eight people at the table speaking eight different languages, and the voice in your ear will always be whispering the one language you want to hear.
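    The dinner-party scenario amounts to a fan-out: each utterance is routed to every other listener in that listener's chosen language. A sketch, with a stub translator that merely tags the language pair rather than really translating (names and language codes are invented):

    ```python
    # Sketch of multi-party routing: each guest hears every utterance
    # in his or her own language. translate() is a stub.
    guests = {"Ana": "pt", "Boris": "ru", "Chen": "zh", "Dara": "en"}

    def translate(text, src, dst):
        # Stub: tag the text with the language pair instead of translating.
        return text if src == dst else f"{text} [{src}->{dst}]"

    def broadcast(speaker, text):
        """Fan one utterance out to every other guest's earpiece."""
        src = guests[speaker]
        return {name: translate(text, src, lang)
                for name, lang in guests.items() if name != speaker}

    print(broadcast("Dara", "Cheers!"))
    # Each of the other three guests receives the toast in their own language.
    ```
    
    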

    The research and commercialization for these breakthroughs are coming from the intersection of the private sector and the defense and intelligence communities. Siri has its roots in a DARPA-funded artificial-intelligence project. Its speech-recognition engine was developed by Nuance Communications, which quietly provides speech software to 70% of the Fortune 100 and spends more than $300 million a year on research and development in voice biometrics.

    The U.S. National Security Agency and the Israeli National Sigint Unit massively invest in basic research in voice biometrics and translation, spurred on in no small measure by how encryption is making digital communications more difficult to analyze. Much of the intelligence-community research goes into accounting for the local dialects, inflections and nuance that professional translators say make it too complex to be sorted algorithmically. As Israelis involved in this work conclude their full-time military service and as U.S. government employees in Maryland and Virginia answer the call of private-sector salaries, these innovations will cross over into the public domain.

    Universal machine translation should accelerate the world’s growing interconnectedness. While the current stage of globalization was propelled in no small part by the adoption of English as the lingua franca for business—to the point that there are now twice as many nonnative English speakers as native speakers—the next wave will open up communication even more broadly by removing the need for a shared language. Currently, when Korean-speaking businesspeople speak with Mandarin-speaking executives at a conference in Brazil, they converse in English. There will no longer be this need, opening the door of global business for nonelites and a massive number of non-English speakers.

    Nor will these be the only language barriers brought down by the new technology. Machines will also reduce the social isolation of tens of millions of people around the world who have severe hearing and speech impairments. While I was traveling recently in Ukraine, a group of engineering students showed me a shiny black-and-blue robot glove called Enable Talk that uses flex sensors in the fingers to recognize sign language and to translate it to text on a smartphone via Bluetooth. The text is then converted to speech, allowing the deaf and mute person to “speak” and be heard in real time. Before long, the language spoken could be one of dozens chosen from a drop-down menu on the phone.
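    The Enable Talk idea described above is itself a small pipeline: flex-sensor readings are classified into signs, the signs accumulate into text, and the text is handed to speech synthesis. A toy sketch, in which the sensor thresholds and the sign table are entirely invented for illustration and bear no relation to the real glove's calibration:

    ```python
    # Toy sketch of a sign-language glove: quantize five flex sensors,
    # look the pattern up in a sign table, join recognized words.
    SIGN_TABLE = {
        # (thumb, index, middle, ring, pinky) bent/straight -> sign
        (0, 1, 1, 0, 0): "hello",
        (1, 1, 1, 1, 1): "thank you",
    }

    def classify(reading):
        """Quantize each sensor to bent (1) or straight (0), then look up."""
        key = tuple(1 if r > 0.5 else 0 for r in reading)
        return SIGN_TABLE.get(key)

    def glove_to_text(readings):
        """Turn a sequence of sensor frames into text for a TTS engine."""
        words = [classify(r) for r in readings]
        return " ".join(w for w in words if w)

    print(glove_to_text([(0.1, 0.9, 0.8, 0.2, 0.3),
                         (0.9, 0.9, 0.9, 0.9, 0.9)]))
    # -> "hello thank you"
    ```

    The drop-down language menu the author imagines would simply slot a translation step between `glove_to_text` and the speech output.
    
    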

    The economic benefits of this new technology should be obvious. Machine translation will take markets that are now viewed as being too difficult to navigate and open them up. Consider a place like Indonesia. There are plenty of English, Mandarin and French speakers in Jakarta and Bali, but very few of them on most of the other 6,000 inhabited islands. If you don’t need to be fluent in Javanese (or any of the 700 other languages spoken in Indonesia) to do business in those other provinces, then they are immediately more accessible, and outside capital is in turn more accessible to them.

    Just across the Banda and Arafura Seas to the east of Indonesia is the resource-rich country of Papua New Guinea. Papua New Guinea is loaded with mineral deposits, agriculture-friendly land and water teeming with valuable seafood (including 18% of the world’s tuna), but its 850 languages scare off most foreign investors.

    Big data applied to translation will change all of this. It will take economically isolated parts of the world and help fold them into the global economy. It will make any of us, in principle, a master of the Tower of Babel.

    —Mr. Ross is the author of “The Industries of the Future,” to be published by Simon & Schuster on Feb. 2, and the former senior adviser for innovation to the U.S. Secretary of State.




  • #2
    Re: Nearly Simultaneous Translations Within Ten Years

    The author is already slightly behind. Checked into a hotel two weeks ago out in the boonies. A screen was set up at the information desk. You swiped a language, spoke, and it spit out Thai good enough for the receptionist to understand. The woman spoke Thai and out came English; she didn't have to swipe at all. The only glitch was when she asked, "What name is the reservation under?" Our friend, Tick, had made the reservation, and the app could only interpret "tick" as in "tick the box."
