When Hurricane Sandy made a devastating left hook into the Mid-Atlantic on Oct. 29, 2012, killing nearly 150 people and causing about $70 billion in damage, a narrative took hold in the weather community and the media that made its way to Capitol Hill.
U.S. weather models were late in forecasting that storm’s bizarre track when compared to the top model from Europe, which locked onto it more than a week in advance. Many in and out of government began to criticize what they saw as a growing modeling gap across the Atlantic Ocean.
The weather model wars are continuing, and new evidence has emerged that instead of making a leap forward in forecast accuracy as Congress has directed, the U.S. may be about to take a step back, at least where high-impact events such as hurricanes and tropical storms are concerned.
The issue concerns a looming upgrade to the National Weather Service’s top weather forecasting model, known as the Global Forecast System, or GFS. Staff at the National Hurricane Center in Miami, which issues hurricane watches and warnings, are pushing back against a plan to implement changes scheduled for May after simulations revealed the new version would make hurricane track forecasts less accurate for storm systems spinning in the Atlantic Ocean.
In short, their argument is that no upgrade is better than a bad upgrade, and that if the upgrade goes forward as planned, forecasts will suffer. Depending on the size of the forecast error, this could put millions of coastal residents in the path of a hurricane at risk.
The planned changes, which the leadership of the National Weather Service has already signed off on, are highly technical, but they amount to attempts to better capture how the atmosphere works.
Computer models are a mainstay of modern weather forecasting. Each takes in thousands of observations from satellites, weather balloons, commercial aircraft, ground stations and more, and then uses complex physics equations and other techniques to project the state of the atmosphere out several days in advance.
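The forecast cycle described above can be sketched in miniature. The toy Python below is purely illustrative, not an actual Weather Service algorithm: it blends a handful of observations into a model state (a crude stand-in for data assimilation), then steps that state forward with an invented "physics" rule. Every function name and number here is an assumption chosen for simplicity; real models work on millions of grid points with far more complex equations.

```python
def assimilate(background, observations, weight=0.3):
    """Nudge each background (prior forecast) value toward its observation.
    A toy stand-in for data assimilation; real schemes weight by error statistics."""
    return [b + weight * (o - b) for b, o in zip(background, observations)]

def step_forward(state, hours=6):
    """Advance the state with a toy physics rule: each point relaxes toward
    the average of its two neighbors (a crude diffusion of heat)."""
    n = len(state)
    return [
        state[i] + 0.1 * hours * ((state[i - 1] + state[(i + 1) % n]) / 2 - state[i])
        for i in range(n)
    ]

# Yesterday's forecast for "now" and today's fresh observations, in degrees C.
background = [20.0, 22.0, 19.0, 21.0]
observed = [21.0, 21.5, 19.5, 20.5]

analysis = assimilate(background, observed)  # best estimate of the current state
forecast = step_forward(analysis)            # project that state 6 hours ahead
```

The two-step rhythm — correct the model toward reality, then integrate forward — repeats every forecast cycle, which is why an upgrade that changes either step can ripple through every downstream product.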
In a series of presentations posted to a National Weather Service website, the Hurricane Center documented a decline in forecast reliability and made known its objection to putting this new model upgrade into service.
Other centers within the National Oceanic and Atmospheric Administration (NOAA) documented either slight improvements, no change, or slight degradations in forecast accuracy with the next model iteration.
Specifically, when forecasters ran the upgraded model on past tropical weather systems, they found that storm track forecast accuracy in the Atlantic was about 9 to 10 percent lower than that of the GFS version already in service, and there was also a small drop in the accuracy of track forecasts in the eastern Pacific Ocean.
One slide, which was presented to senior management, summarized the Hurricane Center’s reservations.
It bluntly states: “The loss of short- to medium-range [tropical cyclone] track and intensity forecast skill for the Atlantic basin in the proposed 2017 GFS is unacceptable to the National Hurricane Center.”
The Hurricane Center also expressed concern about how changes in the GFS model would affect other hurricane models that receive inputs from the GFS. Those possible impacts have not been analyzed.
“Therefore, we oppose this implementation,” the presentation says.
In response to questions from Mashable, Bill Lapenta, the director of the National Centers for Environmental Prediction, which oversees the Hurricane Center, said the upgrade is designed to “make strategic architectural infrastructure improvements to move our entire modeling suite forward,” and makes few changes to the underlying science.
“As is typically the case with model upgrades, we expect slight performance improvements in some NWS service areas (i.e., aviation, severe, winter, etc.) and slight degradations in others,” he said in a statement.
Fiddling with a computer model is a bit like trying to put together a piece of Ikea furniture: you may succeed in solving one part of the challenge, but in using that Allen wrench you’ll accidentally knock another part out of place. So too do model upgrades tend to improve forecasts for some weather phenomena while making others more problematic.
Lapenta said that further investigation revealed that most of the increase in forecast errors with the upgraded GFS model was coming from mistakes in how it handled three particular storms, rather than all storms.
However, that finding was not reflected in a presentation from the Hurricane Center, which showed more widespread forecast degradation across more than 700 test cases in the Atlantic. Moreover, if one of those high-error storms in the future happens to be a high-impact storm like Sandy, it could cause major problems for coastal residents in particular.
It would also be another black eye on an already maligned computer model.
In addition, he noted that hurricane forecasters in Miami have access to the full range of data from other models, and aren’t locked into a GFS-based forecast.
Series of tune-ups
The weather agency contends that the model has become far more advanced and reliable since 2012, thanks in part to post-Sandy funding to improve its power and capabilities.
A bill passed in the wake of the disaster provided $48 million for NOAA to improve its forecasting operations, with $25 million going to boosting its computing capabilities to try to bring the GFS on par with the model run by the European Center for Medium Range Weather Forecasts, or ECMWF.
High on the priority list for Congress and the Weather Service has been improving the forecasting of high-impact weather events such as hurricanes. So it is particularly concerning for hurricane forecasters to see a coming deterioration in forecast accuracy.
“After Sandy we spent all this money in supplemental modeling money” to improve hurricane forecasts and better predict the next Sandy further ahead of time, said Ryan Maue, a meteorologist at WeatherBELL Analytics, a private company. “I honestly don’t know if we ever accomplished that.”
Maue has reviewed the Weather Service presentations and the Hurricane Center’s objections. He says the testing clearly shows a step backwards for the forecasting agency.
“This is a severely negative result that would have real-world impacts on forecasting hurricanes especially in the short term,” he said of the next GFS model upgrade.
“We’re right back to where we started five years ago when we all knew that the GFS sucked,” he said.
The Hurricane Center has not supported some of the other GFS upgrades since Sandy either, because they showed little improvement, or even degradation, in forecast skill.
Maue faults the Weather Service for increasing the horizontal resolution of its flagship model without also addressing the vertical resolution, since weather happens both at and above Earth’s surface. The European Center, he says, has superior vertical resolution.
The European Center, for its part, is planning to build a next-generation supercomputing center in Italy and continues to outpace the U.S. in terms of forecast accuracy and computing power. Lessening the impact of this disparity, though, is the fact that American forecasters do have access to the European models’ simulations.
Florian Pappenberger, the director of forecasts at the European Center for Medium-Range Weather Forecasts in Reading, England, said at his agency, there are cases where the model is tinkered with and the results backslide in accuracy rather than taking a step forward.
“It’s a bit of a shame that this seems to be not working out as hoped,” he said of the GFS model changes. He said agencies sometimes knowingly take a hit to forecast accuracy in the short term in order to reap greater long-term benefits from future tune-ups.
However, the next upgrade after this one isn’t scheduled until a completely new GFS is rolled out in 2019.
During the past five years, NOAA has been busy trying to improve its computing power to bring its forecasting models to parity with the Europeans as well as other groups in Japan, China, Korea and the UK. This includes a jump in computing power in 2016, and a new “4D” system of inhaling data — technically known as data assimilation — into the model.
Critics within and outside the agency have long said the Europeans have a superior way of ingesting data and turning it into useful information for its models.
In late July of 2016, NOAA announced its intention to build a new, “state-of-the-art” global forecasting model to eventually replace the current GFS.