Google: Looking Ahead to the Next Decade of AI Research
Wed, April 14, 2021

Multi-modal learning is said to be a growing trend / Photo Credit: metamorworks (via Shutterstock)

Google showcased several AI research projects this week that support its goal of being an “AI first” company, reported Stephanie Condon of ZDNet, a business technology news site. In some cases, such as the tech giant’s translation research, commercial applications already exist; in other areas, like interactive textiles, the practical use cases are not yet clear. But Google’s AI researchers are focused on the bigger picture, said Google AI chief Jeff Dean. “We try to do long-term work, and often that provides an arc of direction, where along the path of an eight- to 10-year journey we throw off useful results [for commercial applications]... and then continue to work on those harder problems,” he explained.

Dean highlighted a few of the more interesting problems that AI researchers will tackle over the next 10 years. Multi-modal learning, for instance, is a “growing trend,” he told ZDNet. Whether you see a photo of a leopard, hear the word “leopard,” or read the word, he said, “there's some common response in a model that helps you understand the properties of leopards, [such as] what they look like.” Another challenge Dean mentioned is taking machine learning models that previously ran on large server-based setups and running them on device. Google is already taking significant steps in this area, with on-device translation offered in 59 languages.
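The idea behind Dean's leopard example is a shared embedding space: a model maps an image, a sound, or a piece of text to vectors, and inputs describing the same concept land close together. A minimal toy sketch (hand-made vectors standing in for learned model outputs, not Google's actual system):

```python
import numpy as np

# Hypothetical embeddings: in multi-modal learning, different modalities of
# the same concept -- a photo, a spoken word, written text -- are mapped into
# one shared vector space. These vectors are illustrative stand-ins only.
embeddings = {
    "image_of_leopard": np.array([0.90, 0.10, 0.20]),
    "spoken_leopard":   np.array([0.85, 0.15, 0.25]),
    "text_leopard":     np.array([0.88, 0.12, 0.18]),
    "text_bicycle":     np.array([0.10, 0.90, 0.30]),
}

def cosine(a, b):
    # Cosine similarity: near 1.0 for vectors pointing the same way,
    # near 0 for unrelated directions.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# The "leopard" inputs, despite coming from different modalities, are more
# similar to each other than any of them is to an unrelated concept.
sim_modalities = cosine(embeddings["image_of_leopard"], embeddings["text_leopard"])
sim_unrelated  = cosine(embeddings["image_of_leopard"], embeddings["text_bicycle"])
print(sim_modalities > sim_unrelated)  # True: shared concept across modalities
```

In a real multi-modal model the vectors come from trained encoders rather than being written by hand, but the geometry is the same: understanding “the properties of leopards” amounts to different inputs converging on a common internal representation.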

As for its interactive textiles, Google showcased how it could literally weave AI into wearables. Researchers showed the I/O braid, a touch-sensitive cord that could serve as an input device on garments or wearable electronics. For example, a hoodie drawstring made from the I/O braid could be twisted, or a headphone cord tapped, to play the next song or otherwise control your phone.

The braid is made from conductive yarns that are sensitive to touch. Some of the cords shown during the demo incorporated optical fibers to enable visual feedback. The research builds on the centuries-old practice of creating different structures from interwoven yarns and textiles. Google senior research scientist Alex Olwal explained to ZDNet, “We're leveraging these structures as part of the algorithm to simplify the type of gesture recognition we're doing.”