Putting the Machines to Work

One of the great contributors to the second Renaissance in meteorology was mathematician John von Neumann of Princeton's Institute for Advanced Study. He was instrumental in the development of the ENIAC. The machine was hardly out of the box before it became apparent that it was just what meteorologists had needed for decades. Jule Charney and Norman Phillips of MIT began to carve out new territory when they revisited the work of L. F. Richardson and saw that a high-speed computer was what had been needed all along to solve the fundamental equations of meteorology. In 1950, the ENIAC made the first computerized 24-hour prediction, and this time, the weather did not move backward at the speed of sound. The modern age of weather forecasting was underway.

Of course, the computers were primitive in the late 1940s and 1950s. The programs and models weren't very sophisticated. Calculations were routinely made for one level, 500 mb, and the very first model made all kinds of assumptions. In order to eliminate the problem of the seventh unknown, heat, no heat was allowed to be added or taken away. That is called adiabatic. Also, the wind field was assumed to be nonaccelerating, with the wind always blowing parallel to the contours. That is called geostrophic. Then came the most phenomenal assumption of all: no temperature variation. The temperature in Florida was assumed to be the same as in Montana. This strange assumption was needed to make the set of equations solvable, even with high-speed electronic computers. This situation of no temperature variation is called barotropic.
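
The book sticks to words, but for readers who like to see the arithmetic, here is a rough sketch in a few lines of Python of what the geostrophic assumption means: the wind is worked out entirely from the slope of the 500-mb height field, so it must blow parallel to the contours. The grid spacing and the sample heights below are invented purely for illustration.

    # A rough sketch (not from the book) of the geostrophic assumption: the wind
    # is computed purely from the slope of the 500-mb height field, so it blows
    # parallel to the height contours. The grid and heights are made up.
    import numpy as np

    g = 9.81         # gravity (m/s^2)
    f = 1.0e-4       # Coriolis parameter at mid-latitudes (1/s)
    dx = dy = 100e3  # grid spacing: 100 km

    # Synthetic 500-mb heights (meters): y increases northward,
    # and heights drop toward the north
    y, x = np.mgrid[0:5, 0:5]
    z500 = 5700.0 - 30.0 * y + 5.0 * x

    # Geostrophic wind: u_g = -(g/f) * dz/dy,  v_g = (g/f) * dz/dx
    dz_dy, dz_dx = np.gradient(z500, dy, dx)
    u_g = -(g / f) * dz_dy
    v_g = (g / f) * dz_dx

    # Roughly 29 m/s from the west and 5 m/s from the south at the center point
    print(u_g[2, 2], v_g[2, 2])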

Weather-Wise

Among the many computer models, or progs, that predict the future positions of weather systems, the earliest have been the baroclinic model, the barotropic model, and the LFM (limited fine mesh) model. They show the flow at 500 mb for 12, 24, and 36 hours into the future. More modern operational forecast models show the atmosphere in great detail, as well as project ahead for two weeks or more.

But amazingly, it worked—at least on a constant pressure surface at 500 mb. At that level, the temperature contrast is less than near the ground, and the winds do run parallel to the contours. Such a forecast didn't exactly show where it might rain or snow, but it did surprisingly well in projecting the changes of the 500-mb contour field. From that, along with insight into the relationships between weather on the ground and the air flow at 500 mb, a better idea of upcoming events could evolve.

All this early barotropic model did was shift the 500-mb wind field around, along with the Highs and Lows. It didn't create anything new. How could it? There was no heat input, and heat is the energy that drives the atmosphere.
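
For the curious, here is a toy sketch, not from the book, of what "shifting the field around" means in practice: a 500-mb disturbance is simply carried downstream by a steady wind, and nothing new ever develops. The grid, steering speed, and time step are all made up for illustration.

    # A toy sketch (not from the book) of what the barotropic model does: an
    # existing 500-mb disturbance is simply carried downstream by a steady wind.
    # Nothing new develops, because nothing in the calculation adds energy.
    import numpy as np

    nx, dx, dt = 100, 100e3, 600.0  # grid points, spacing (m), time step (s)
    u = 20.0                        # steady westerly steering flow (m/s)

    x = np.arange(nx) * dx
    zeta = np.exp(-((x - 3.0e6) / 5.0e5) ** 2)  # a short-wave "bump" at 3,000 km

    for _ in range(144):            # 144 steps of 10 minutes = 24 hours
        # simple upwind advection: the pattern is only moved, never amplified
        zeta[1:] -= u * dt / dx * (zeta[1:] - zeta[:-1])

    # The bump is now centered near 4,700 km; its total amount is essentially
    # unchanged.
    print(x[np.argmax(zeta)], zeta.sum())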

Yet this very simple, early barotropic computer model became the mainstay of computer projections for more than 25 years—even when more advanced products were made available. For example, another model, called the baroclinic model, eventually came along and did include temperature change. Now storms could develop and dissipate because there was a source of energy. This model was made possible as computers became more sophisticated. Still, the simple barotropic model proved to be more accurate and more reliable, time and time again.

As computer chips began to replace transistors in the 1960s and computers became more efficient, it became possible for weather engineers to run Richardson's entire experiment on the machine. It was feasible to use the basic set of atmospheric equations and come up with a solution for tomorrow's weather. These basic equations became known as the primitive equations, and the model became known as the primitive equation (PE) model. Of course there would still be assumptions. The computers weren't that smart. And there is always a problem with that seventh unknown—heat.
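
The book never writes these equations out, but in simplified form, leaving friction aside, the primitive equation set amounts to statements of horizontal motion, hydrostatic balance, conservation of mass, thermodynamic energy, and the ideal gas law. This is an illustrative sketch, not the exact operational set; here v is the wind, p pressure, ρ density, T temperature, and Q the heating rate:

$$
\begin{aligned}
\frac{D\mathbf{v}}{Dt} &= -\frac{1}{\rho}\nabla p - f\,\mathbf{k}\times\mathbf{v} && \text{(horizontal motion)} \\
\frac{\partial p}{\partial z} &= -\rho g && \text{(hydrostatic balance)} \\
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{v}) &= 0 && \text{(conservation of mass)} \\
c_p\,\frac{DT}{Dt} - \frac{1}{\rho}\frac{Dp}{Dt} &= Q && \text{(thermodynamic energy)} \\
p &= \rho R T && \text{(ideal gas law)}
\end{aligned}
$$

Loosely counted, with the two horizontal motion equations taken separately, that is six equations, while the unknowns (two wind components, vertical motion, pressure, density, temperature, and the heating Q) number seven. Q is that troublesome seventh unknown.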

But for the first time, at least, the pure expressions could be tackled with a minimum of approximations. Also, calculations were done at a number of levels, which provided a good cross-section of what the atmosphere was doing.

During the mid and late 1970s, the PE model was refined further, and more refinements followed during the 1980s. The data was now placed on a grid system with a spacing of less than 60 nautical miles. The first of the fine-grid models was called the limited fine mesh (LFM), and just like the early experience with UNIVAC and CBS, its products were not initially warmly embraced. But a big snowstorm in 1978 changed that.

Weather-Speak

The LFM (limited fine mesh) is a computer model for predicting weather systems. It has a closer grid spacing and more data points than two other models, the barotropic and the baroclinic.

On February 3, 1978, the new LFM computer model projected the development of a massive storm in the Atlantic waters southeast of Florida. The storm was projected to move northward and turn into a howling nor'easter. However, even in 1978, the computer models were not used with exceptional confidence. Forecasters still liked to do it the old-fashioned way—they liked to look, sense, and feel their way through reams of hand-drawn charts. The primitive equation products were not that widely accepted, so when the LFM projected a massive East Coast snowstorm, it was greeted with a collective yawn. The next day, I was interviewed by a radio station and described a potential blizzard arriving on the following day. Before my snowy interview was played on the air, the station played an interview with one forecaster from the National Weather Service. The forecaster said, "It's going to snow because of a front coming from Ohio, but there's no blizzard coming." Not all forecasters from the National Weather Service bought into their own computer products.

Well, as it turned out, snow began to fall just before dawn in western New England on February 6, and the rest became history. The Northeast experienced one of the greatest blizzards of the twentieth century. Winds reached hurricane intensity, massive floods swamped coastal communities, and the average snow depth was more than two feet. That didn't include drifts, which were monumental—five feet or more. Traffic was completely paralyzed. To most people, the magnitude of the storm was a total surprise. In the Boston area, commuters were on their way to work when the snow began to fall. Major arteries became snarled with stalled, blocked traffic. The storm raged through February 8.

On February 9, the computer products had some brand-new believers. Because of this blizzard, computer products became more and more part of the everyday forecast.

Too Much of a Good Thing?

The LFM model of the 1970s was followed by even more sophisticated versions in the following 30 years. Thanks to advances in computer technology, numerous, detailed computer forecast models have come along. More than a dozen models from the United States, Europe, and Canada are available on a daily basis. Private companies and universities have developed their own computer forecasts, too. Also some products provide an ensemble, or average, of the different outputs. The consensus version is often more reliable than an individual run. Forecasters have become more and more dependent on the many computer products. Lately the trick is to find the set that works best in a particular situation. A daily study of all the available products can be so time consuming that the deadline for issuing a forecast can come and go before all the perusing is done. As we saw in "Let's See How It's Done," it's possible to simply rip and read the computer output, and the forecast will be very presentable. The 24-hour accuracy of the computer product runs about 85 percent. There's nothing wrong with that. However, the remaining 15 percent can make or break an operational forecaster.
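
To picture the consensus idea mentioned above (the model names and numbers here are invented, not taken from any real run), an ensemble can be as simple as averaging what several models say about the same quantity:

    # A tiny sketch of the consensus idea (model names and numbers are invented,
    # not from any real forecast): an "ensemble" can be as simple as averaging
    # what several different models say about the same quantity.
    forecasts_f = {        # hypothetical 24-hour high-temperature forecasts (deg F)
        "model_a": 41.0,
        "model_b": 45.0,
        "model_c": 38.0,
        "model_d": 44.0,
    }

    consensus = sum(forecasts_f.values()) / len(forecasts_f)
    print(f"Consensus forecast: {consensus:.1f} F")  # 42.0 F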

We still have too few equations for the number of variables. Heat, the development factor, is still a puzzle. It remains the extra unknown. If the basic set of six equations is expanded, the number of variables expands, too. There is always one variable too many. No matter how large the computers may become, how fast they may compute, or how much storage they may have, the theoretical stone wall remains. A dynamic, objective, 100 percent accurate solution remains elusive. New technology can feed an infinite amount of data into those machines, but the basic theoretical problem stays with us.

Within that 15 percent error window come the most dynamic weather systems on the face of the earth, the ones that a totally computer-based forecast will have difficulty predicting. And these are the ones that attract the most attention. People are watching and listening when a hurricane approaches the mainland. By the time a forecaster delivers all the options, the viewer is often totally confused. The forecaster might say, "On the one hand, such and such will occur. On the other hand, this will occur." It goes on and on. The simple fact is that the forecaster and the computer are stumped. Other developing situations, such as blizzards, fall into the same category.

In early March 2001, computers projected a massive snowstorm for the East Coast. Major cities from Boston to Washington were shut down in anticipation of the "storm of the new century." Travel was restricted. Businesses and schools were closed. After all, there was plenty of talk of this storm matching the February 1978 blizzard, or even the great blizzard of 1888. But this storm never lived up to its billing. Snowfall was little or none from New York City southward, and in New England, the storm turned into just a typical bout of winter weather. Anyone who just depends on the computer products will miss many of the big ones, even with the advanced models.

Weather-Wise

Long-range computer products project the general characteristics of the atmosphere, but they have trouble providing accurate, specific local weather predictions. This is where we need the human touch.

Ironically, in the pre-computer era, in the late nineteenth century, an experienced forecaster would be 85 percent correct. Now an inexperienced forecaster can have the same level of accuracy by looking at a computer product. But to go beyond that, a forecaster really has to go back to basics—looking, sensing, and even drawing some maps. There's never any substitute for experience, even in the computer age. Just go on TV and try it.

In addition, the atmosphere by its own nature is unpredictable. During the 1960s, another MIT meteorologist, Edward N. Lorenz, developed the concept of chaos, whereby small initial deviations or disturbances, even the flapping of butterfly wings, magnify and become overwhelming in the mathematical solution of the atmosphere. The atmosphere is just too chaotic. More data points and more calculations may not do the trick. The computer would have to be the size and capacity of the universe. A forecaster can't develop a reliable product without a good measure of intellect, intuition, mental analogs, experience, and nerve. Those are the intangibles.
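
Lorenz's point can be reproduced in a few lines. Here is a small sketch, not from the book, using his classic three-equation "toy atmosphere": two runs that begin one part in a million apart end up looking nothing alike.

    # A small sketch (not from the book) of Lorenz's point, using his classic
    # three-equation "toy atmosphere." Two runs that start out differing by one
    # part in a million end up looking nothing alike. The step size and run
    # length are arbitrary choices for illustration.
    def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # one simple forward-Euler step of the Lorenz-63 equations
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        return x + dx * dt, y + dy * dt, z + dz * dt

    a = (1.0, 1.0, 1.0)       # run 1
    b = (1.000001, 1.0, 1.0)  # run 2: the "butterfly flap" difference

    for _ in range(3000):     # march both runs forward in time
        a = lorenz_step(*a)
        b = lorenz_step(*b)

    print(a)  # by now the two trajectories bear no resemblance to each other
    print(b)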

The Machines Are Getting Better

I have slammed the computer enough. Even though a computer forecast for the next day's weather may not be sufficiently accurate to stand alone in critical situations, the computer forecasts have improved in accuracy and detail over the years. For example, during the 1990s, the 36- to 60-hour forecast of precipitation became at least as accurate as the 12- to 36-hour prediction of the 1970s. Also during the 1990s, the three-day forecasts of general low-pressure development and position became as accurate as the 36-hour prediction of the late 1970s. These products show the atmosphere in great detail from the surface of the earth to the stratosphere. In addition, monthly and seasonal forecasts have become more reliable with the increased knowledge gained from understanding features such as El Niño and La Niña. On an even longer scale, supercomputers that can repeat calculations with different sets of initial data are being used to deliver climate forecasts for the next 100 years. It's enough to make one dizzy.

Still, for small-scale occurrences such as tornadoes, flash floods, and hail storms, along with the details of upcoming precipitation patterns, the computer forecasts have not shown great improvement. The shortcomings remain linked to the limits of our understanding. Yet advanced technology that uses observational data, including Doppler radar systems, is able to provide better lead time for local severe weather warnings. Tornado warnings can now be issued as much as 20 minutes in advance, which can save lives.

Excerpted from The Complete Idiot's Guide to Weather © 2002 by Mel Goldstein, Ph.D. All rights reserved including the right of reproduction in whole or in part in any form. Used by arrangement with Alpha Books, a member of Penguin Group (USA) Inc.