Google’s I/O conference offers a modest vision of the future

SAN FRANCISCO — There was a time when Google offered a wondrous vision of the future, with driverless cars, augmented reality glasses, unlimited email and photo storage, and predictive text that finishes your sentences.

A more modest Google was on display Wednesday as the company kicked off its annual developer conference. The Google of 2022 is more pragmatic and sensible, a bit more like its business-focused competitor Microsoft than a fantasy playground for techies.

And that is apparently on purpose. The bold vision is still out there – but it’s a long way off. The professional executives who now run Google are increasingly focused on squeezing money out of those years of research and development spending.

The company’s biggest bet on artificial intelligence doesn’t mean sci-fi coming to life, at least for now. It means more subtle changes to existing products.

“AI improves our products, making them more helpful, more accessible and delivering innovative new features for everyone,” Google chief executive Sundar Pichai said on Wednesday.

In a presentation without wow moments, Google emphasized that its products are “helpful.” In fact, Google executives used the words “help,” “helping” or “helpful” more than 50 times during the two-hour keynote, including in a marketing campaign for its new hardware products with the line, “When it comes to helping, we can’t help it.”

It introduced a cheaper version of its Pixel smartphone, a round-screen smartwatch and a new tablet that will be released next year. (“The World’s Most Helpful Tablet.”)

The biggest applause came for a new Google Docs feature in which the company’s artificial intelligence algorithms automatically summarize a long document into a single paragraph.

At the same time, it wasn’t immediately clear how some of the other groundbreaking work Google touted, like language models that can better understand natural conversations or break a task into logical smaller steps, would ultimately lead to the next generation of products.

Certainly, some of the new ideas do seem helpful. In a demonstration of how Google continues to improve its search technology, the company showed off a feature called “Multisearch,” in which a user can take a picture of a shelf full of chocolates and then find the top-rated, nut-free dark chocolate bar in the image.

In another example, Google showed how a user could find an image of a specific dish, such as Korean fried noodles, and then search for nearby restaurants that serve it.

Many of these capabilities are built on the deep technical work Google has been doing for years in machine learning, image recognition and natural language understanding. It’s a sign of evolution rather than revolution for Google and other tech giants.

Many companies can build digital services more easily and quickly than in the past thanks to shared technologies like cloud computing and storage, but building the underlying infrastructure, such as artificial intelligence language models, is so costly and time-consuming that only the wealthiest companies can afford to invest in it.

As is often the case at Google events, the company didn’t spend much time explaining how it makes money. Google brought up advertising, which still accounts for 80 percent of the company’s revenue, only after an hour of other announcements, highlighting a new feature called My Ad Center that lets users request fewer ads from certain brands or flag topics they want to see more ads about.
