How Forecasting Really Works

By: Joshua Steiner

Forecasting is often thought of as a guessing game by non-meteorologists (and by plenty of meteorologists, too). In some instances, that is fair. But for the most part, the science of meteorology has gained tremendous ground over the last fifty years: in scientific discovery, in observation networks and platforms, and especially in the realm of numerical modeling, the real basis for forecasting in the modern era.

Prior to the 1950s, most forecasting was based on rules of thumb and general pattern recognition accumulated by scientists over the preceding centuries. Before the early 1900s, forecasting rested almost entirely on rules of thumb and old wives’ tales, with little or no success. Even so, the basic mechanics of the atmosphere were reasonably well understood in the 18th and 19th centuries, especially by the pioneers of fluid mechanics, the discipline on which meteorology is built. Yet, as you can imagine, accurate weather predictions more than a day or two in advance were essentially impossible at that point.

In the early twentieth century, atmospheric scientists like Vilhelm Bjerknes began researching new meteorological questions, including whether a future state of the atmosphere could be forecast from a small set of diagnostic and prognostic equations. Bjerknes and other Scandinavian meteorologists went on to pioneer the conceptual model of the “mid-latitude cyclone,” a description of how storm systems work and why the weather can change so rapidly. This moved weather prediction beyond mere rules of thumb: with a conceptual model of how warm fronts, cold fronts, and occluded fronts behave, meteorologists could predict precipitation patterns, storm-system movement, and temperature changes, something that previously was not possible.

In the early 1920s, Lewis Fry Richardson made the first attempt at predicting a future state of the atmosphere numerically. His hand-computed forecast failed badly, but once computers with reasonable speed became available in the early 1950s, numerical weather modeling took hold and, in fact, became quite skillful at forecasting the state of the atmosphere even three to four days in advance.

In the modern era, forecasts are based almost entirely on a combination of the forecaster’s meteorological knowledge and the skill of the forecast models. A global computer model solves a set of governing equations that predicts the future state of the atmosphere at millions of grid points covering the entire world. Other models make forecasts for limited areas, such as North America, and these tend to be used more often in research and in severe-storm forecasting. Many of the forecasts you see from the National Weather Service and from private forecasting companies like The Weather Channel are point-based forecasts built from bias-corrected model output and blends of other numerical guidance. One common type of bias-corrected forecast is produced with statistical regression equations that relate model output to past observations. These forecasts arrive as numerical data, and the computer systems at each National Weather Service office turn them into graphical maps and user-friendly products, a fast way for forecasters to present useful information to the public.
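
To make the regression idea concrete, here is a minimal sketch in Python, using a hypothetical station with made-up numbers: fit a straight line between past raw model temperature forecasts and the temperatures that were actually observed, then use that line to correct a new forecast. This is only an illustration of the concept, not any agency’s actual post-processing system.

    import numpy as np

    # Hypothetical history at one station: raw model 2 m temperature
    # forecasts (deg F) paired with what was actually observed.
    model_temps    = np.array([72.0, 75.0, 68.0, 80.0, 77.0, 71.0, 83.0])
    observed_temps = np.array([70.0, 72.5, 66.0, 77.0, 74.5, 69.0, 80.0])

    # Fit a simple linear regression: observed ~ a * model + b.
    # The fitted line absorbs any systematic warm or cold bias.
    a, b = np.polyfit(model_temps, observed_temps, 1)

    # Apply the correction to a fresh raw model forecast.
    raw_forecast = 79.0
    corrected = a * raw_forecast + b
    print(f"raw model forecast:      {raw_forecast:.1f} F")
    print(f"bias-corrected forecast: {corrected:.1f} F")

Operational systems do the same basic thing in a far more elaborate way, with many predictors, long training records, and separate equations for different stations, seasons, and forecast hours.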

Model forecasts are far from perfect, however. The biggest problem modern numerical modeling faces is convective thunderstorms and how to forecast them. Most models (though some can) cannot explicitly forecast convection because their grid spacing is too large, and grid spacing is limited both by computational resources and by what is reasonable for the forecast being made. Any summertime convection event, including those of the past few months, illustrates this. Even the models that can forecast thunderstorms have a hard time with storm motion and development, because these features occur on such small scales, and human forecasters are well known to do a much better job of anticipating thunderstorm development than current models do. As long as models have these problems, forecasts will need the skill and atmospheric expertise that forecasters bring to the table, especially for events like squall lines, supercells, tornadoes, and major flooding, all of which are difficult to anticipate more than a day in advance.
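
As a rough back-of-the-envelope illustration, a commonly cited rule of thumb is that a model cannot faithfully represent features much smaller than about five to seven of its grid lengths. The sketch below applies that rule to two assumed grid spacings, roughly 13 km for a global model and roughly 3 km for a convection-allowing model; the specific numbers are illustrative, not tied to any particular operational system.

    # Assumed rule of thumb: features smaller than ~7 grid lengths
    # are not well represented by a model.
    def smallest_resolvable_feature_km(grid_spacing_km, factor=7):
        return factor * grid_spacing_km

    examples = [
        ("global model, ~13 km grid spacing", 13.0),
        ("convection-allowing model, ~3 km grid spacing", 3.0),
    ]

    for name, dx in examples:
        print(f"{name}: smallest well-resolved feature "
              f"~ {smallest_resolvable_feature_km(dx):.0f} km")

    # An individual thunderstorm updraft is only a few kilometers across,
    # so even the finer grid barely captures it, and the coarser grid
    # cannot represent it explicitly at all.

This is why coarser global models must represent thunderstorms indirectly, through convective parameterization, rather than simulating the storms themselves.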

One day, numerical models may come close to being completely accurate. However, it has been theorized that there is a limit to the predictability of the atmosphere; it is often supposed that the longest we could ever make explicit predictions for is about sixteen days out. This is because atmospheric motion is fundamentally chaotic on small scales: even tiny ‘blips’ in the wind field can have major consequences for the atmosphere down the road. Compound this with the numerical instabilities that arise when solving the equations, and it does not seem like we will ever arrive at perfect forecasting. Until then, human forecasters will keep working to improve their own ability to make sense of the weather and to anticipate dangerous events before they happen.
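
The classic toy illustration of this sensitivity is the Lorenz (1963) system, a set of three simple equations whose solutions behave chaotically. The sketch below, a minimal Python example rather than anything resembling a real weather model, runs the system twice from starting points that differ by one part in a million and prints how far apart the two “forecasts” drift.

    import numpy as np

    def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # Right-hand side of the Lorenz (1963) equations.
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    def rk4_step(s, dt=0.01):
        # One fourth-order Runge-Kutta time step.
        k1 = lorenz_rhs(s)
        k2 = lorenz_rhs(s + 0.5 * dt * k1)
        k3 = lorenz_rhs(s + 0.5 * dt * k2)
        k4 = lorenz_rhs(s + dt * k3)
        return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    # Two runs whose initial states differ by one part in a million.
    a = np.array([1.0, 1.0, 1.0])
    b = np.array([1.000001, 1.0, 1.0])

    for step in range(1, 3001):
        a = rk4_step(a)
        b = rk4_step(b)
        if step % 500 == 0:
            print(f"step {step:4d}: separation = {np.linalg.norm(a - b):.6f}")

The tiny initial difference grows by many orders of magnitude until the two runs no longer resemble each other at all, which is exactly the kind of behavior that caps how far ahead the real atmosphere can be predicted.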