Ken Mylne, Met Office
Uncertainty is an inherent part of weather forecasting – the very word "forecast", coined by Admiral FitzRoy, who founded the Met Office over 150 years ago, differentiates it from a prediction by its implication of uncertainty. Chaos theory – the idea that, in non-linear systems, forecasts are sensitive to small errors in the initial conditions – originated in early experiments in numerical modelling in meteorology.
Modern weather forecasting, using highly complex and non-linear models of atmospheric circulation and physics, uses a technique called ensemble forecasting to explicitly address uncertainty in chaotic systems. Ensemble forecasting is similar in concept to Monte Carlo modelling, but with far fewer model runs because of the high computational cost of forecast models. Small perturbations are added to the analysis of the current state of the atmosphere, and multiple forecasts are run forward from the perturbed initial states. Small perturbations to aspects of the model physics are often also employed, to address uncertainty due to approximations in the model's representation of physics and of unresolved sub-grid-scale motions. The ensemble then provides a set of typically between 20 and 50 separate forecast solutions, each a self-consistent forecast scenario. The set may be used to estimate the likelihood of different forecast outcomes and to assess the risks associated with different scenarios, particularly for potentially high-impact weather – which is where PURE comes in.
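The recipe above – perturb the analysis, run each perturbed state forward through a non-linear model, and read probabilities off the spread of outcomes – can be sketched in miniature. The toy below is purely illustrative: the "model" is a logistic map standing in for a chaotic atmosphere, and all function names and parameter values are invented for this sketch, not anything used operationally.

```python
import random

def toy_model(x, steps=50, r=3.9):
    """A chaotic toy 'atmosphere': the logistic map stands in for a
    non-linear forecast model (illustrative only)."""
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

def ensemble_forecast(analysis, n_members=20, perturb=1e-4, seed=42):
    """Add small perturbations to the analysis and run each perturbed
    initial state forward, giving one forecast per ensemble member."""
    rng = random.Random(seed)
    return [toy_model(analysis + rng.uniform(-perturb, perturb))
            for _ in range(n_members)]

def prob_exceed(members, threshold):
    """Estimate the probability of an event as the fraction of
    ensemble members in which it occurs."""
    return sum(m > threshold for m in members) / len(members)

members = ensemble_forecast(analysis=0.4)
p = prob_exceed(members, threshold=0.5)
```

Even with perturbations of only 0.0001, fifty steps of the chaotic map are enough for the members to diverge into quite different final states – which is exactly why a single deterministic run can be misleading, and why the ensemble's spread carries useful information.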
Ensemble forecasting has traditionally been used for medium-range forecasting several days ahead, where the synoptic-scale evolution of large pressure systems is highly chaotic, as well as for longer-range prediction. However, the recent advent of models with grid lengths of 1-4km, which can partially resolve the highly non-linear circulations within convective storms such as thunderstorms, means that ensembles are now an essential feature of forecasts for the next 12-24 hours as well. Figure 1 shows an example from a 2.2km convection-permitting ensemble: a forecast of the probability of rain exceeding 0.2mm falling within a particular hour. Here a post-processing method is also used to account for the limited sampling of the ensemble and smooth the probabilities, by allowing for the possibility that rain seen at one grid point in the model could equally well be falling at adjacent grid points within around 30km.
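The "neighbourhood" idea can be sketched as follows: count an ensemble member as producing the event at a grid point if the event occurs anywhere within some radius of that point, then average over members. This is a minimal illustrative version – the operational scheme applied to MOGREPS-UK differs in detail (e.g. in its neighbourhood shape and radius), and the function below is invented for this sketch.

```python
def neighbourhood_prob(exceed_fields, radius):
    """Neighbourhood post-processing sketch.
    exceed_fields: one 2-D boolean grid per ensemble member, True where
    rain exceeds the threshold. radius: neighbourhood half-width in
    grid points. Returns a 2-D grid of smoothed probabilities."""
    ny, nx = len(exceed_fields[0]), len(exceed_fields[0][0])
    probs = [[0.0] * nx for _ in range(ny)]
    for field in exceed_fields:
        for j in range(ny):
            for i in range(nx):
                # Member counts as a 'hit' here if the event occurs
                # anywhere in the surrounding neighbourhood.
                hit = any(field[jj][ii]
                          for jj in range(max(0, j - radius), min(ny, j + radius + 1))
                          for ii in range(max(0, i - radius), min(nx, i + radius + 1)))
                if hit:
                    probs[j][i] += 1.0 / len(exceed_fields)
    return probs

# Two toy members on a 4x4 grid: one has rain at a single point.
member_a = [[False] * 4 for _ in range(4)]
member_a[1][1] = True
member_b = [[False] * 4 for _ in range(4)]
probs = neighbourhood_prob([member_a, member_b], radius=1)
```

The effect is that a single rainy grid point in one of two members spreads a probability of 0.5 over its neighbourhood instead of producing an isolated spike, which both smooths the field and compensates for the ensemble's small sample size.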
Figure 1: Example of forecast of heavy rainfall (exceeding 16mm/hr) from the Met Office’s 2.2km MOGREPS-UK ensemble post-processed with “neighbourhood” processing to account for under-sampling by the ensemble.
Ensemble forecasts provide a powerful tool for estimating the likelihood of extreme weather events, but properly understanding the risk requires translating this into the probability of a significant impact on vulnerable assets. This is a significant challenge because each vulnerable asset has its own impact mechanism, which must be modelled. Furthermore, impact is more difficult to measure objectively, and the forecasts are therefore difficult to validate or calibrate in any objective manner. This work to develop a Hazard Impact Model (HIM) is being undertaken within the Natural Hazards Partnership, a collaboration of several UK agencies supported by the Cabinet Office, with the aim of providing guidance to government on societal risk from natural hazards. Two of the NERC PURE PhD studentships sponsored by the Met Office aim to contribute to the HIM:
• At Exeter University, within the CREDIBLE consortium, looking at impacts of severe windstorms, and
• At Reading University, within the RACER consortium, addressing impacts of cold temperature outbreaks.
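The core idea of translating hazard probabilities into impact probabilities can be sketched very simply: pass each ensemble member's hazard value through a vulnerability curve for an asset, and average the results over members. Everything below – the logistic vulnerability curve, its parameters, and the gust values – is a hypothetical illustration, not the Natural Hazards Partnership's actual HIM.

```python
import math

def vulnerability(gust_ms, midpoint=25.0, steepness=0.4):
    """Hypothetical vulnerability curve: probability that an asset is
    damaged given a peak gust in m/s. A logistic shape is assumed
    purely for illustration."""
    return 1.0 / (1.0 + math.exp(-steepness * (gust_ms - midpoint)))

def impact_probability(member_gusts):
    """Average the conditional damage probability over ensemble members:
    P(damage) is approximately (1/N) * sum over members of
    P(damage | gust in that member)."""
    return sum(vulnerability(g) for g in member_gusts) / len(member_gusts)

# Two synthetic five-member ensembles of peak gust forecasts (m/s):
calm_day = [12, 14, 13, 15, 11]
stormy_day = [24, 31, 28, 35, 26]
```

The hard part, as the text notes, is not this arithmetic but knowing the vulnerability curve for each asset – and validating it, since impacts are far harder to observe systematically than rainfall or wind.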
One of the biggest challenges is the communication of uncertainty to the users of weather forecasts. Traditionally, forecasters have expressed uncertainties with subjective phrases such as "rain at times, heavy in places in the north, perhaps leading to localised flooding". With increasingly automated forecasts for very specific locations, there is a need to express uncertainty through symbols or graphics, in ways that people understand – and believe that they understand. A number of experiments over recent years have demonstrated that people from a range of social and educational backgrounds can make better decisions when presented with more complex probabilistic forecast information than when they are given a simple deterministic (or categorical) forecast. Despite this, the message that frequently comes back from customers and service managers is "people don't understand probabilities" and "users just need to make a decision", so users appear not to believe that they can understand the uncertainty information or know how to respond to it. As scientists we may be able to demonstrate that providing uncertainty gives a more complete picture of the forecast and allows for the possibility of better decision-making (assuming, of course, that the uncertainty information has skill), but convincing users of this – from government civil protection agencies and commercial business users to the general public – remains a significant challenge. We hope that the PURE project will make a significant contribution to breaking down these barriers and finding improved ways of communicating the complete forecast picture.
On the day of posting, we have just received news of the terrible tornado near Oklahoma City. Tornadoes are a prime example of a low-probability, high-impact event, especially when you forecast the probability of an individual property or person being affected. In general the people of Oklahoma know well how to respond to watches (issued hours ahead, with low probabilities over areas at risk) and warnings (issued minutes ahead, with higher probability). Sadly, on this occasion the storm was so severe that these normal measures could not protect everyone. We are currently experimenting, in collaboration with the US National Severe Storms Laboratory, with new ensemble tools and high-resolution models for forecasting tornado risk. Very early indications are encouraging.