Google’s AR translation glasses are still just vaporware

At the end of its I/O presentation on Wednesday, Google pulled out a "one more thing" surprise. In a short video, Google showed off augmented reality glasses with a single purpose: displaying translations of spoken language right in front of your eyes. In the video, Google product manager Max Spear called this prototype's capability "closed captions for the world," and we see family members communicating with one another for the first time.

Now hold on just a second. Like many people, we've used Google Translate before, and for the most part it's a very impressive tool that nonetheless produces plenty of embarrassing misfires. While we might trust it to get us directions to the bus, that's nowhere near the same as trusting it to correctly interpret and relay our parents' childhood stories. And hasn't Google claimed to be breaking the language barrier before?

In 2017, Google marketed real-time translation as a feature of its original Pixel Buds. Our former colleague Sean O'Kane described the experience as "a commendable idea with a deplorable execution" and reported that some of the people he tried it with said it sounded like he was a five-year-old. That's not quite what Google showed in its video.

Nor do we want to overlook the fact that Google is promising this translation will happen inside a pair of AR glasses. Not to pick at a sore spot, but the reality of augmented reality hasn't yet caught up with even Google's concept video from a decade ago. You know, the one that acted as the precursor to the much-maligned and embarrassing-to-wear Google Glass?

To be fair, Google’s AR translation glasses seem a lot more focused than what Glass was trying to achieve. From what Google has shown, they’re meant to do one thing – display translated text – and not act as an ambient computing experience that could replace a smartphone. But even then, making AR glasses is not easy. Even a moderate amount of ambient light can make viewing text on translucent screens very difficult. It’s difficult enough to read subtitles on a TV when the sun is shining through a window. Now imagine that experience, but strapped to your face (and with the added pressure of engaging in conversation with someone you can’t understand on your own).

But hey, technology moves fast, and Google may be able to overcome a hurdle that has been holding back its competitors. That wouldn't change the fact that Google Translate isn't a panacea for cross-language conversation. If you've ever tried to hold a real conversation through a translation app, then you probably know you have to speak slowly. And methodically. And clearly. Unless you want to risk a garbled translation. One slip of the tongue, and you might just have to start over.

Humans don't converse in a vacuum or talk like machines do. Just as we code-switch when speaking to voice assistants like Alexa, Siri, or Google Assistant, we know we need to use much simpler phrases when dealing with machine translation. And even when we speak correctly, the translation can still come out awkward and be misunderstood. Some of our Verge colleagues who are fluent in Korean pointed out that Google's own pre-roll countdown for I/O displayed an honorific version of "Welcome" in Korean that nobody actually uses.

That slightly embarrassing flub pales in comparison to the fact that, according to tweets from Rami Ismail and Sam Ettinger, Google showed over half a dozen backwards, broken, or otherwise incorrect scripts on a single slide during its Translate presentation. (Android Police notes that a Google employee acknowledged the mistake, and that it was corrected in the YouTube version of the keynote.) To be clear, it's not that we expect perfection. But Google is trying to tell us that it's close to cracking real-time translation, and mistakes like this make that seem incredibly unlikely.

Google is trying to solve an enormously complicated problem. Translating words is easy; understanding grammar is difficult, but possible. But language and communication are far more complex than just those two things. As a relatively simple example, Antonio's mother speaks three languages (Italian, Spanish, and English). She sometimes borrows words from one language mid-sentence, including from her regional Italian dialect (which is like a fourth language). Something like that is relatively easy for a human to parse, but could Google's prototype glasses handle it? Never mind the messier parts of conversation, like unclear references, incomplete thoughts, and innuendo.

It's not that Google's goal isn't admirable. We absolutely want to live in a world where everyone gets to experience what the research participants in the video do, gazing in amazement as the words of their loved ones appear before them. Breaking down language barriers and understanding one another in ways we couldn't before is something the world needs much more of; it's just that there's still a long way to go before we reach that future. Machine translation is here, and has been for a long time. But despite the wealth of languages it can handle, it doesn't yet speak human.
