Uncertainty in Weather Forecasting

Ken Mylne, Met Office

Uncertainty is an inherent part of weather forecasting – the very word forecast, coined by Admiral FitzRoy, who founded the Met Office over 150 years ago, distinguishes it from a prediction by implying uncertainty. Chaos theory – the idea that forecasts in non-linear systems are sensitive to small errors in the initial conditions – originated in early numerical modelling experiments in meteorology.

Modern weather forecasting, using highly complex and non-linear models of atmospheric circulation and physics, uses a technique called ensemble forecasting to address uncertainty in chaotic systems explicitly. Ensemble forecasting is similar in concept to Monte Carlo modelling, but with far fewer model runs because of the high computational cost of forecast models. Small perturbations are added to the analysis of the current state of the atmosphere, and multiple forecasts are run forward from the perturbed initial states. Small perturbations to aspects of the model physics are often also employed, to address uncertainty due to approximations in the model's representation of physics and of unresolved sub-grid-scale motions. The ensemble then provides a set of typically between 20 and 50 separate forecast solutions, each a self-consistent forecast scenario. The set may be used to estimate the likelihood of different forecast outcomes and to assess the risks associated with different scenarios, particularly with potentially high-impact weather – which is where PURE comes in.
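The idea can be illustrated with a toy sketch (purely illustrative Python, not the Met Office system: the Lorenz-63 equations stand in for a real atmosphere model, and all parameter values are invented). We perturb an analysed initial state, integrate each member forward, and read off an event probability as the fraction of members in which the event occurs.

```python
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz-63 system, the classic chaotic
    # toy model that stands in here for a full atmospheric model.
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

def run_ensemble(analysis, n_members=20, pert_size=1e-3, n_steps=4000, seed=0):
    # Add small random perturbations to the "analysis" of the current state
    # and run each perturbed member forward in time.
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        state = analysis + pert_size * rng.standard_normal(3)
        for _ in range(n_steps):
            state = lorenz_step(state)
        members.append(state)
    return np.array(members)

analysis = np.array([1.0, 1.0, 1.0])   # hypothetical analysed initial state
forecasts = run_ensemble(analysis)
# Probability estimate for an arbitrary "event": x positive at forecast time.
p_event = np.mean(forecasts[:, 0] > 0.0)
```

Despite perturbations of only 0.001, the members diverge to quite different final states – the chaotic sensitivity that makes ensembles necessary in the first place.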

Ensemble forecasting has traditionally been used for medium-range forecasting of several days ahead, where the synoptic-scale evolution of large pressure systems is highly chaotic, as well as for longer-range prediction. However, the recent advent of models with grid-lengths of 1-4 km, which can partially resolve the highly non-linear circulations within convective storms such as thunderstorms, means that ensembles are now an essential feature of forecasts for the next 12-24 hours. Figure 1 shows an example forecast, from a 2.2 km convection-permitting ensemble, of rain falling within a particular hour exceeding 0.2 mm. Here a post-processing method is also used to account for the limited sampling of the ensemble and to smooth the probabilities, by also allowing for the possibility that rain seen at one grid-point in the model could equally likely be falling at adjacent grid-points within around 30 km.
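A minimal sketch of this "neighbourhood" idea (illustrative Python only – the grid, threshold, and neighbourhood size are made up, and the operational MOGREPS-UK post-processing is more sophisticated): treat every grid-point within a square neighbourhood of every member as an equally likely realisation, and pool them into one probability per point.

```python
import numpy as np

def neighbourhood_probability(rain, threshold=0.2, radius=1):
    # rain: array of shape (n_members, ny, nx) of forecast rainfall (mm).
    # For each grid point, the probability is the fraction of
    # (member, neighbour grid-point) pairs exceeding the threshold
    # within a square neighbourhood of the given radius.
    n_members, ny, nx = rain.shape
    exceed = (rain > threshold).astype(float)
    prob = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            j0, j1 = max(0, j - radius), min(ny, j + radius + 1)
            i0, i1 = max(0, i - radius), min(nx, i + radius + 1)
            prob[j, i] = exceed[:, j0:j1, i0:i1].mean()
    return prob

# Tiny synthetic example: 3 members on a 5x5 grid, with rain at the
# central point in one member only.
rain = np.zeros((3, 5, 5))
rain[0, 2, 2] = 1.0
p = neighbourhood_probability(rain, threshold=0.2, radius=1)
```

Pooling over neighbours spreads the single rainy grid-point into a small non-zero probability over the surrounding area, rather than a spike of 1/3 at one point – exactly the smoothing effect described above.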

Figure 1: Example of forecast of heavy rainfall (exceeding 16mm/hr) from the Met Office’s 2.2km MOGREPS-UK ensemble post-processed with “neighbourhood” processing to account for under-sampling by the ensemble.

Ensemble forecasts provide a powerful tool for estimating the likelihood of extreme weather events, but to properly understand the risk this must be translated into the probability of a significant impact on vulnerable assets. This is a significant challenge because each vulnerable asset has its own impact mechanism, which must be modelled. Furthermore, impact is more difficult to measure objectively, so impact forecasts are difficult to validate or calibrate in any objective manner. Work to develop a Hazard Impact Model (HIM) is being undertaken collaboratively within the Natural Hazards Partnership, a collaboration of several UK agencies supported by the Cabinet Office, with the aim of providing guidance to the government on societal risk from natural hazards. Two of the NERC PURE PhD studentships sponsored by the Met Office aim to contribute to the HIM:

•    At Exeter University, within the CREDIBLE consortium, looking at impacts of severe windstorms, and

•    At Reading University within the RACER consortium addressing impacts of cold temperature outbreaks.

One of the biggest challenges is the communication of uncertainty to the users of weather forecasts. Traditionally, forecasters have expressed uncertainty with subjective phrases such as “rain at times, heavy in places in the north, perhaps leading to localised flooding”. With increasingly automated forecasts for very specific locations, there is a need to express uncertainty through symbols or graphics, in ways that people understand – and believe that they understand. A number of experiments in recent years have demonstrated that people from a range of social and educational backgrounds can make better decisions when presented with more complex probabilistic forecast information than when given a simple deterministic (or categorical) forecast. Despite this, the message that frequently comes back from customers and service managers is “people don’t understand probabilities” and “users just need to make a decision”, so users appear not to believe that they can understand uncertainty information or know how to respond to it. As scientists we may be able to demonstrate that providing uncertainty gives a more complete picture of the forecast and allows for the possibility of better decision-making (assuming, of course, that the uncertainty information has skill), but convincing users of this – from government civil protection agencies and commercial business users to the general public – remains a significant challenge. We hope that the PURE project will make a significant contribution to breaking down these barriers and finding improved ways of communicating the complete forecast picture.


On the day of posting, we have just received news of the terrible tornado near Oklahoma City. Tornadoes are a prime example of a low-probability, high-impact event, especially when you forecast the probability of an individual property or person being affected. In general the people of Oklahoma know well how to respond to watches (issued hours ahead, with low probabilities over areas at risk) and warnings (issued minutes ahead, with higher probability). Sadly, on this occasion the storm was so severe that the normal precautions did not protect everyone. We are currently experimenting, in collaboration with the US National Severe Storms Laboratory, with new ensemble tools and high-resolution models for forecasting tornado risk. Very early indications are encouraging.

Lahars – Floods of Volcanic Mud

Jeremy Phillips

In this instalment I will introduce some of the hazards and research questions concerning the strange-sounding lahar – or volcanic mudflow, if your Indonesian is a little rusty. Firstly, some things about me: I am an academic volcanologist with an interest in developing models of volcanic processes that can be used to assess their hazard in practical situations, so I have broad interests in uncertainty in hazard modelling. Secondly, this is my first blog post ever (there’s probably some term for this), so here goes…

Lahars are a mixture of volcanic ash, other rocks and soils, and water, with a consistency similar to wet concrete. They are formed in two main ways: when a volcano erupts hot ash onto its snow-covered or glaciated flanks and surroundings, or when intense rainfall remobilises volcanic ash deposits from previous eruptions or ongoing activity. The resulting mixture of ash and water tends to flow in existing valleys, which are typically steep-sided higher on the volcano and feed into flatter river systems. Like other hazardous mass flows, lahars are erosive on steep slopes (this is where the ‘other rocks and soils’ come from) and form deposits on shallow slopes, but in lahars the addition of rock and soils by erosion is extreme – the volume of a lahar can increase by up to a factor of ten through this effect. The flow carries a high concentration of solids, enabling it to transport large boulders, so building damage can be considerable when the flow overtops river channels in inhabited regions. Lahars can also be extremely fast-moving – have a look at http://www.youtube.com/watch?v=kznwnpNTB6k for three amazing clips of recent lahars in Japan.

Despite their initiation in localised regions, lahars can be very large and destructive. A key driver in the development of volcanic hazards communication videos in the 1980s was the lahar resulting from the eruption of Nevado del Ruiz in Colombia in 1985, which destroyed the town of Armero 74 km away with an estimated loss of 23,500 lives. Lahar activity is common in countries with high snow-covered volcanoes, such as Japan and those along the Andes; in countries with active volcanoes and seasonal rainfall, including Indonesia and Central America; and simply where there has been a large recent explosive eruption, as in the areas of the western US affected by the 1980 Mount St Helens eruption. Lahars are closely related to mudflows and debris flows, and these hazards are widely distributed globally.

Update on ‘earthquake hazard/risk’ activities in CREDIBLE

Katsu Goda

On 20th April 2013, another dreadful earthquake disaster occurred in Sichuan, China. The earthquake was a moment magnitude (Mw) 6.6 event originating on the Longmenshan fault (http://earthquake.usgs.gov/earthquakes/eventpage/usb000gcdd#summary). As of 27th April, the number of fatalities (including those missing) was 217; in addition, more than 13,000 people were injured and more than 2 million were affected by the earthquake. The earthquake source region was the same as that of the catastrophic 12th May 2008 Mw 7.9 Wenchuan earthquake, in which the total number of deaths exceeded 69,000.

From an earthquake risk management perspective, this tragic event reminds us of two important facts. Firstly, earthquake damage is the consequence of both the seismic protection implemented and the seismic hazard experienced. In simple terms, damage is caused when seismic demand exceeds the seismic capacity of a structure. Plenty of reports and news items have indicated that seismic design provisions for buildings and infrastructure, and their implementation, in Sichuan province were not sufficient even after the 2008 earthquake: newly constructed buildings were severely damaged or collapsed during the recent event. Since adequate techniques and devices to mitigate seismic damage to physical structures (e.g. ductile connections, dampers, and isolation) are available, the damage and loss could have been reduced significantly. It is also important to recognise that recovery and reconstruction from disasters provide opportunities to fix such problems fundamentally – we should take advantage of those opportunities in a proactive manner.
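The demand-versus-capacity statement can be made concrete with a standard textbook formulation (not a model from this post): if both the seismic demand D imposed on a structure and its capacity C are treated as lognormal random variables, the probability of damage P(D > C) has a closed form. The numbers below are purely illustrative.

```python
import math

def damage_probability(median_demand, beta_demand, median_capacity, beta_capacity):
    # With lognormal demand D (median mD, log-std betaD) and lognormal
    # capacity C (median mC, log-std betaC), the damage probability is
    #   P(D > C) = Phi( (ln mD - ln mC) / sqrt(betaD^2 + betaC^2) )
    # where Phi is the standard normal CDF, written here via math.erf.
    z = (math.log(median_demand) - math.log(median_capacity)) / \
        math.sqrt(beta_demand ** 2 + beta_capacity ** 2)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Illustrative numbers (not from any real building): demand well below
# capacity on average, but with substantial uncertainty in both.
p = damage_probability(median_demand=0.3, beta_demand=0.5,
                       median_capacity=0.6, beta_capacity=0.4)
```

Note how the uncertainty terms matter: even with a median capacity twice the median demand, the overlap of the two distributions leaves a damage probability of roughly 14% – which is why quantifying variability, not just central estimates, is essential.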

Secondly, major earthquakes occur repeatedly in the same seismic region, and triggered events can cause significant damage. The 2013 event may not have been directly triggered by the 2008 event; nonetheless, it is suspected that the earlier event had some influence on the occurrence of the recent one through complex stress interaction within the Longmenshan fault system (this is plausible in light of the triggering of the 1999 Mw 7.1 Hector Mine earthquake by the 1992 Mw 7.3 Landers earthquake in California; Felzer et al., 2002). The destructiveness of repeated earthquakes has also been highlighted in recent earthquake disasters, particularly the 2010-2011 Canterbury sequence (Shcherbakov et al., 2012). The mainshock was the 4th September 2010 Mw 7.1 Darfield earthquake, which occurred about 40 km from the city centre of Christchurch. Following this event, aftershocks continued and migrated toward the city. On 22nd February 2011, a major Mw 6.3 Christchurch earthquake struck very near to downtown Christchurch at a shallow depth, causing severe damage to buildings and infrastructure.

The current probabilistic seismic hazard and risk analysis framework does not address aftershock impact. It focuses mainly on the effects of mainshocks, which are modelled by stationary Poisson processes (or some renewal processes). Clearly, in a post-mainshock situation, aftershock generation processes are no longer time-invariant. As part of the PURE-CREDIBLE projects, I have been working on this issue by analysing worldwide aftershock sequences to fit simple statistical models, namely the Gutenberg-Richter law, the modified Omori law, and Bath's law (Shcherbakov et al., 2013). The main aim is to evaluate the fit of these three seismological models to actual mainshock-aftershock data and the uncertainty (variability) of the model parameters. These models facilitate the quantification of aftershock hazard following a major mainshock and the simulation of artificial mainshock-aftershock sequences. Building upon this, I have further investigated how to assess the nonlinear seismic damage potential of aftershocks on structures, in addition to mainshocks (Goda and Taylor, 2012; Goda and Salami, 2013). Extensive nonlinear dynamic simulations of structures subjected to real mainshock-aftershock sequences were conducted to establish empirical benchmarks for the aftershock effects on structures. Moreover, a procedure for generating artificial mainshock-aftershock sequences was devised, and the equivalence of the nonlinear damage potential of real and artificial sequences was verified. With the newly developed seismological and engineering tools, one can assess aftershock hazard and risk quantitatively; these tools can also serve as a starting model for evaluating aftershock impact in low-to-moderate seismicity regions where real mainshock-aftershock sequences are not available.
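As a rough sketch of the three laws just mentioned (illustrative Python; the parameter values are invented for the example and are not the fitted values of Shcherbakov et al., 2013):

```python
import math

def gutenberg_richter_count(a, b, m):
    # Gutenberg-Richter law: log10 N(>=m) = a - b*m, so the expected
    # number of events with magnitude >= m is 10**(a - b*m).
    return 10.0 ** (a - b * m)

def omori_rate(t, K, c, p):
    # Modified Omori law: the aftershock rate decays with time t (days)
    # after the mainshock as n(t) = K / (t + c)**p.
    return K / (t + c) ** p

def bath_largest_aftershock(mainshock_mag, delta_m=1.2):
    # Bath's law: on average, the largest aftershock is about 1.2
    # magnitude units smaller than the mainshock.
    return mainshock_mag - delta_m

# Illustrative parameter values (not fitted to any real sequence):
n_m4 = gutenberg_richter_count(a=5.0, b=1.0, m=4.0)   # expected M>=4 aftershocks
rate_day1 = omori_rate(t=1.0, K=100.0, c=0.1, p=1.1)  # events/day, one day on
m_largest = bath_largest_aftershock(6.6)              # after an Mw 6.6 mainshock
```

Together the three laws specify how many aftershocks to expect above a given magnitude, how quickly they taper off, and how large the biggest one is likely to be – the ingredients needed to simulate a mainshock-aftershock sequence.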
Importantly, the extended seismic hazard and risk analysis methodology takes triggered/induced hazards into account more comprehensively (in this regard, many improvements remain to be made by including geotechnical hazards and tsunamis), and envisages a dynamic, rather than quasi-static, risk assessment framework. It is noteworthy that major challenges in characterising aftershock generation remain. The developments above focus on the temporal features of aftershock occurrence (e.g. the decay of seismic activity after a mainshock), while the spatial dependence of aftershocks is not explicitly considered. A workable technique is the ETAS (Epidemic Type Aftershock Sequence) model (Ogata and Zhuang, 2006), which allows individual aftershocks to generate their own aftershock sequences and aftershock occurrence rates to vary in space. Prof. Ian Main in the PURE-RACER consortium is actively working on this issue.
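The temporal part of an ETAS model can be sketched as follows (illustrative Python; the parameter values are invented, and a full ETAS model as in Ogata and Zhuang (2006) also includes a spatial kernel): the total rate is a background rate plus an Omori-type contribution from every earlier event, so each aftershock seeds further aftershocks of its own.

```python
import math

def etas_rate(t, events, mu=0.1, K=0.05, alpha=1.5, c=0.01, p=1.1, m0=3.0):
    # Temporal ETAS conditional intensity at time t (days):
    #   lambda(t) = mu + sum over past events i of
    #               K * exp(alpha * (m_i - m0)) / (t - t_i + c)**p
    # mu is the background rate; the exponential productivity term means
    # larger events trigger more offspring, and each past event (mainshock
    # or aftershock alike) contributes its own Omori-type decay.
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m0)) / (t - t_i + c) ** p
    return rate

# Hypothetical sequence: a mainshock at t=0 and a large aftershock at
# t=2 days, each with its own magnitude.
events = [(0.0, 7.1), (2.0, 6.3)]
r = etas_rate(5.0, events)
```

Because the t=2 aftershock adds its own term to the sum, the modelled rate at t=5 is higher than Omori decay from the mainshock alone would give – the "epidemic" branching that the plain modified Omori law cannot capture.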

Another notable development in engineering seismology and earthquake engineering is the (near) completion of the NGA-West2 project (see http://peer.berkeley.edu/ngawest2/ for general information) on ground motion prediction models for shallow crustal earthquakes worldwide. Various research outcomes from the NGA-West2 project were presented at the 2013 Seismological Society of America annual meeting in Salt Lake City. A new set of ground motion models will eventually replace the 2008 models. The new suite improves the modelling of directivity and directionality (e.g. near-fault motions), extends the applicable range to lower magnitudes (i.e. magnitude scaling), adds models for vertical motions, quantifies epistemic uncertainty, and refines the evaluation of soil amplification factors (including non-linear site response). All these refinements are important and relevant for seismic hazard assessment in Europe and the U.K. In particular, more robust magnitude scaling of the predicted ground motion will enhance the accuracy of seismic hazard assessment (note: the ongoing NGA-East project will have greater influence on seismic hazard and risk assessment in the U.K.).

Lastly, I would like to draw attention to the publication of the ‘Handbook of Seismic Risk Analysis and Management of Civil Infrastructure Systems’ (Woodhead Publishing, Cambridge, U.K.; http://www.woodheadpublishing.com/en/book.aspx?bookID=2497), which I have co-edited. The contributors to the handbook are international (U.K., Italy, Germany, Greece, Turkey, Canada, U.S.A., Japan, Hong Kong, New Zealand, and Colombia), with diverse professional backgrounds and interests. The handbook covers a wide range of topics related to earthquake hazard assessment, seismic risk analysis, and risk management for civil infrastructure. One of its main aims is to provide a state-of-the-art overview of seismic risk analysis and uncertainty quantification. I hope it will be an invaluable guide for professionals who need to understand the impact of earthquakes on buildings and lifelines, and the seismic risk assessment and management of buildings, bridges, and transportation.


Felzer, K.R., Becker, T.W., Abercrombie, R.E., Ekstrom, G., Rice, J.R. (2002). Triggering of the 1999 Mw 7.1 Hector Mine earthquake by aftershocks of the 1992 Mw 7.3 Landers earthquake. J. Geophys. Res.: Solid Earth, 107, ESE 6-1–ESE 6-13.

Goda, K., Taylor, C.A. (2012). Effects of aftershocks on peak ductility demand due to strong ground motion records from shallow crustal earthquakes. Earthquake Eng. Struct. Dyn., 41, 2311-2330.

Goda, K., Salami, M.R. (2013). Cloud and incremental dynamic analyses for inelastic seismic demand estimation subjected to mainshock-aftershock sequences. Bull. Earthquake Eng. (in review).

Ogata, Y., Zhuang, J. (2006). Space–time ETAS models and an improved extension. Tectonophysics, 413, 13-23.

Shcherbakov, R., Nguyen, M., Quigley, M. (2012). Statistical analysis of the 2010 Mw 7.1 Christchurch, New Zealand, earthquake aftershock sequence. New Zealand J. Geol. Geophys., 55, 305-311.

Shcherbakov, R., Goda, K., Ivanian, A., Atkinson, G.M. (2013). Aftershock statistics of major subduction earthquakes. Bull. Seismol. Soc. Am. (in review).