Cloud over forecasts - Debate opens on forecasts' merit
GroundCover™ Issue: 50
The value to grain growers of acting on seasonal rainfall forecasts issued by the Bureau of Meteorology has been questioned in a high-profile study, opening up a rigorous debate in meteorological circles.
The researchers, from The University of Melbourne, say the existing seasonal rainfall forecasting system could not have reliably added significant value to growers' decisions. They estimate that growers who routinely acted on the forecasts would have experienced anything from small losses to, at best, small gains.
In the study, which is yet to be published in scientific journals, they say that a substantial improvement in the skill of the seasonal forecasting system is required to improve returns to farmers and other end users. However, the bureau is defending its performance, saying its forecast system is one of the best of its type in the world.
Researchers Andrew Vizard, Garry Anderson and David Buckley arrived at their conclusions using a meteorological assessment method called value scoring, recently developed in the United States. They say this was the first time that this rigorous approach to assessing the value of forecasts to end users had been applied to Australian meteorological data.
In their research they examined the historical value of the seasonal rainfall forecasting system, and also estimated the increased value to farmers that would accrue from any possible improvement in forecasting skill.
They are preparing to publish scientific papers on their research, which was presented at a symposium at the CSIRO's Division of Plant Industry in December sponsored by The Hermon Slade Foundation.
However, experts at the bureau are questioning the methodology the researchers have used to verify the forecasts, which involved comparing every seasonal rainfall forecast issued by the bureau since observations at a township level were started in 1997 with the actual results at 256 towns, and then using the resulting 5731 observations to assess the skill of the forecasts.
The bureau says that six years is too short a time frame on which to base any conclusions. Its forecasting systems are based on 50 years of observations, with forecasts for each year being tested against the remaining 49 years.
In addition, the bureau says it changed its system during the study's period, thus affecting the observations used. It points out that the modelling was based on 24 quarterly forecasts that covered the whole country during the six-year period, which were then interpreted at township level for the benefit of users. It says these could not be regarded as hundreds of independent forecasts, and the statistical significance of the results is therefore difficult to assess.
The bureau maintains that many externally produced studies by organisations such as the Queensland Department of Primary Industries have confirmed the economic value to agricultural and other industries of its seasonal rainfall forecasts.
Neil Plummer, supervising meteorologist at the bureau's National Climate Centre, says that among statistically based models, the bureau's is one of the world's most sophisticated systems.
The bureau is now moving towards the introduction of a dynamic computer model that draws on current observations from the atmosphere and oceans, similar to that which now produces daily weather forecasts.
Mr Plummer says the bureau regularly seeks feedback about how to improve its provision of information to agriculture through workshops, seminars and field days, such as a drought debriefing it held last year.
The University of Melbourne study's co-researcher, Garry Anderson, explains the reasons for undertaking the study: "Australian agricultural producers, such as those in the grains industry, need to manage their production risk, which is driven by the high seasonal uncertainty in rainfall. From earlier research we concluded that seasonal forecasts could potentially reduce costs associated with this uncertainty."
However, he says that potential is eroded if the forecasts do not vary from the underlying risk of a dry season. The bureau has defined a "dry" season as rainfall in the lowest third of seasons. By this definition, over the long term, 33 out of 100 seasons are expected to be dry.
Associate Professor Vizard says "skill" is the ability to provide accurate information beyond the underlying risk. "A forecasting system that had maximum skill would provide a forecast of either 100 percent chance of a dry season or zero percent chance of a dry season, and would always be correct. It is relatively easy to show that such a forecasting system would be of very high value to growers and other end users," he says.
"At the other extreme, a forecasting system that had absolutely no skill would continuously forecast one number - the underlying risk, in this case by definition a 33 percent chance of a dry season. It is easy to show that such a forecasting system would be useless to all end users. For example, imagine a system that forecast the probability of rolling a six on a die. If the forecasting system continually predicted a one in six chance of rolling a six, it would be of no value to end users," Professor Vizard says.
The research showed that the forecasting system has been close to the "no-skill" case.
Mr Anderson says the majority of forecasts have been clustered around the underlying risk of 33 percent. He estimates that the current system captures only about two percent of the skill of a perfect forecasting system. He says the main issue is that the forecasts hardly deviate from the underlying risk of 33 percent. Half the forecasts have been between 30 percent and 38 percent, and among the 5731 forecasts ever issued, there was no forecast greater than a 66 percent probability of a dry season.
Professor Vizard asks: "How can a grower extract value from a system that continually forecasts at or near the underlying risk of 33 percent?"
"We examined this in detail, using a rigorous and proven approach, and demonstrated that, at best, very small benefits could accrue for a narrow class of end-use. In this analysis we assumed that the bureau's forecasts were unbiased even though our verification study suggested that a bias existed."
"We do not doubt that the bureau's forecasting system is one of the most sophisticated in the world; our research shows that even world's best practice is currently not sufficient to add value to end users."
The researchers also determined the benefits that could be realised from any potential increase in skill of seasonal forecasting. This showed that a major improvement in skill is required before seasonal rainfall forecasts can meaningfully add value for most end users.
"These findings have important long-term implications that are quite independent from any argument about current skill levels in seasonal forecasts or any method used to derive a forecast," Professor Vizard says.
"We showed that any seasonal forecasting system that only captures about two percent of the skill of a perfect forecast cannot deliver value to end users. Further, any seasonal forecasting system that delivers say, 10 percent of the perfect forecast, would still offer only slight benefits to growers."
"Substantial benefits for growers and other end users only kick in when a seasonal forecasting system captures about 35 percent of the skill of a perfect system. This represents a very large gap in skill level between what has been delivered and what is needed. When a forecasting system has the desired skill, more than a quarter of forecasts are below a 10 percent chance of a dry season and more than a quarter are above a 50 percent chance of a dry season. We have fallen well short of this situation," Professor Vizard says.
Mr Anderson says it is relatively easy to monitor changes in skill of the seasonal forecasts. "If the forecasts stay clustered around 33 percent, the skill remains low. Accurate forecasts that are closer to zero percent or 100 percent chance of a dry season are required in order to deliver real value. This is something that a farmer can check on," Mr Anderson says.
"For instance, in the year 2000 there was a major change in the seasonal rainfall forecasting system, but monitoring of forecasts since that time shows that forecasts are still clustered around 33 percent."
The researchers concluded that their study demonstrated that a substantial increase in the skill of seasonal rainfall forecasts is required before they can add value to most decisions that end users face. However, the bureau's Mr Plummer says that "with their research still to be published and peer-reviewed, and with counter views that seasonal forecasts are valuable to agriculture, it is fair to say that the jury is still out on this issue".
The University of Melbourne team related the following grower scenario to highlight what they say is the gap between the skill of seasonal rainfall forecasts and grower requirements.
Farmer Smith is a wheat grower. It is August and it has been a dry winter. He does some calculations and figures out that if it is a normal spring he will be about $20,000 better off if he fertilises his crop with nitrogen and achieves a yield increase.
However, if it is a dry spring, he will be $20,000 worse off if he fertilises with nitrogen.
Can he use seasonal forecasts to help make the decision? He is waiting to find out the probability of a dry spring. It can be shown that if Farmer Smith is interested in maximising his expected economic benefit, he would apply the nitrogen - unless the probability of a dry spring is greater than 50 percent.
Imagine 100 seasons where Farmer Smith uses this strategy. Thirty-three of these seasons are expected to be dry, with the remainder normal.
If Farmer Smith had access to a perfect seasonal rainfall forecasting system, he would have applied nitrogen in the 67 normal springs, and not applied nitrogen in the 33 expected dry springs. Compared with simply applying nitrogen every year, the average yearly benefit of the perfect forecasting system to him would be $6600.
However, the current forecasting system is not perfect. Most of the forecasts are clustered around 33 percent. In fact, of 100 seasonal forecasts, 98 will be less than a 50 percent chance of a dry season. Farmer Smith will therefore apply nitrogen in 98 of the 100 seasons.
But in reality, 32 of these 98 seasons will be dry, so Mr Smith will have inappropriately fertilised. Further, of the two seasons in which Mr Smith does not apply fertiliser, one is expected to be dry (a correct decision) and the other normal, meaning nitrogen was inappropriately withheld.
Compared with simply applying nitrogen every year, the average yearly benefit of the current seasonal forecasting system to Mr Smith is therefore zero.
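The Farmer Smith arithmetic above can be reproduced in a short sketch. All figures ($20,000 gain or loss, the 33-in-100 dry-season risk, and the 98-of-100 forecast behaviour) are taken directly from the scenario; the code simply tallies the three strategies over a representative 100 seasons.

```python
# Worked version of the Farmer Smith scenario from the article.
# Figures come from the scenario itself; nothing here is new data.

GAIN = 20_000    # $ benefit of applying nitrogen in a normal spring
LOSS = -20_000   # $ loss from applying nitrogen in a dry spring
SEASONS = 100
DRY, NORMAL = 33, 67   # long-term underlying risk: 33 dry seasons in 100

# Strategy 1: no forecast - always apply nitrogen.
always_apply = (NORMAL * GAIN + DRY * LOSS) / SEASONS   # $ per year

# Strategy 2: perfect forecast - apply only in the 67 normal springs,
# withhold in all 33 dry springs (neither gain nor loss in those years).
perfect = (NORMAL * GAIN) / SEASONS

# Strategy 3: current system - only 2 of 100 forecasts exceed the 50 percent
# decision threshold, so nitrogen is applied in 98 seasons (66 normal,
# 32 dry) and withheld in 2 (1 normal, 1 dry).
current = (66 * GAIN + 32 * LOSS) / SEASONS

print(perfect - always_apply)   # value of a perfect forecast per year
print(current - always_apply)   # value of the current forecasts per year
```

Run as written, the first difference comes to $6600 a year and the second to zero, matching the article's figures: because the current forecasts almost never cross the 50 percent threshold that would change Farmer Smith's decision, acting on them is indistinguishable from ignoring them.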
For more information:
Professor Andrew Vizard, 03 9731 2225, firstname.lastname@example.org