Google's AI "Instantaneously" Predicts Local Precipitation Patterns
Thu, October 21, 2021

The researchers took on a physics-free and data-driven approach / Photo Credit: Vera Larina (via Shutterstock)


Google is tapping into AI and machine learning to make local weather predictions faster, reported Kyle Wiggers of American technology news platform VentureBeat. Google has detailed an AI system that uses satellite images to generate "nearly instantaneous," high-resolution forecasts with roughly one-kilometer resolution and an average latency of only five to ten minutes. The system takes a data-driven, physics-free approach to weather modeling, learning to approximate atmospheric physics from examples alone rather than by incorporating prior knowledge.

The Google researchers explained that a convolutional network consists of a sequence of layers, where each layer is a set of mathematical operations. Their network is a U-Net: the layers are arranged in an encoding phase that progressively reduces the resolution of images passing through them, followed by a decoding phase that expands the low-dimensional image representations produced during encoding back to full resolution. For their initial work, the engineering team trained a model on historical observations over the US from 2017 to 2019, split into four-week chunks, with a portion of each chunk reserved for evaluation. They compared the model to three baselines: the High-Resolution Rapid Refresh (HRRR) numerical forecast from the National Oceanic and Atmospheric Administration, an optical flow algorithm that tracks moving objects through a sequence of images, and a "persistence model in which each location was assumed to be raining in the future at the same rate it was raining then."
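As a rough illustration, not the researchers' actual architecture, the U-Net's encode-then-decode data flow can be sketched in a few lines of NumPy. The pooling and upsampling functions, the grid size, and the network depth below are all invented for illustration; real U-Nets also apply learned convolutions at every level, which are omitted here:

```python
import numpy as np

def avg_pool2(x):
    """Halve spatial resolution with 2x2 average pooling (one encoding step)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    """Double spatial resolution with nearest-neighbor upsampling (one decoding step)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def toy_unet(image, depth=2):
    """Toy U-Net data flow: encode down to a low-resolution representation,
    then decode back up, adding a skip connection from each encoding level.
    Only the resolution bookkeeping is shown; no learned weights."""
    skips = []
    x = image
    for _ in range(depth):       # encoding phase: resolution shrinks
        skips.append(x)
        x = avg_pool2(x)
    for _ in range(depth):       # decoding phase: resolution grows
        x = upsample2(x)
        x = x + skips.pop()      # skip connection from the matching level
    return x

frame = np.random.rand(16, 16)   # stand-in for one satellite/radar frame
out = toy_unet(frame)
print(out.shape)                 # (16, 16): output resolution matches the input
```

The key property the sketch preserves is that the decoder restores the encoder's resolution level by level, so the network can emit a forecast on the same grid as its input imagery.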

The researchers reported that their system outperformed all three baselines, with HRRR overtaking it only when the prediction horizon exceeded roughly five to six hours. HRRR, however, has a computational latency of one to three hours, significantly higher than that of their system. They said, "The numerical model used in HRRR can make better long term predictions, in part because it uses a full 3D physical model." Because cloud formation is harder to observe from 2D images, it is more difficult for machine learning methods to learn convective processes. The researchers leave applying machine learning to 3D models to future studies.
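The persistence baseline quoted above is simple enough to sketch directly: the forecast for every future time step is just the current precipitation rate at each location. The grid size, horizon, and error metric below are invented for illustration:

```python
import numpy as np

def persistence_forecast(current_rate, horizon_steps):
    """Persistence baseline: assume each location keeps raining in the
    future at the same rate it is raining now."""
    return np.stack([current_rate] * horizon_steps)

# Hypothetical toy data: precipitation rate on a small grid.
rng = np.random.default_rng(0)
now = rng.random((4, 4))
forecast = persistence_forecast(now, horizon_steps=3)

# Score against (made-up) observed future frames with mean squared error,
# the kind of comparison used to rank a model against its baselines.
observed = now + 0.1 * rng.standard_normal((3, 4, 4))
mse = float(np.mean((forecast - observed) ** 2))
print(forecast.shape)   # (3, 4, 4): one grid per future time step
```

Despite its crudeness, persistence is a standard nowcasting baseline: over very short horizons rain fields change slowly, so any learned model must beat it to demonstrate real skill.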