Quantifying the unquantifiable: floods

Keith Beven

We had the first joint meeting of the RACER and CREDIBLE consortia in the area of floods at Imperial College last week. Saying something about the frequency and magnitude of floods is difficult, especially when estimates are required for low-probability events (Annual Exceedance Probabilities of 0.01 and 0.001 for flood plain planning purposes in the UK, for example) without very long records being available. Even for short-term flood forecasting there can be large uncertainties associated with observed rainfall inputs, even larger ones associated with rainfalls forecast into the future, a strongly uncertain, nonlinear relationship between the antecedent conditions in a catchment and how much of that rainfall becomes streamflow, and uncertainty in the routing of that streamflow down to areas at risk of flooding. In both flood frequency and flood forecasting applications, there is also the issue of how flood magnitudes might change in the future due to both land management and climate changes.

Nearly all of these sources of uncertainty have both aleatory and epistemic components.    As a hydrologist, I am not short of hydrological models – there are far too many in the literature and there are also modelling systems that provide facilities for choosing and combining different types of model components in representing the fast and slow elements of catchment response (e.g. PRMS, FUSE, SuperFLEX).   I am, however, short of good ways of representing the uncertainties in the inputs and observed variables with which model outputs might be compared.

Why is this? It is because some of the errors and uncertainties are essentially unquantifiable epistemic errors. In rainfall-runoff modelling we have inputs that are not free of epistemic error being processed through a nonlinear model that is itself not error-free, and compared with output observations that are not free from epistemic error. Some of the available data may actually be disinformative in the sense of being physically inconsistent (e.g. more runoff in the stream than observed rainfall; Beven et al., 2011), but in rather arbitrary ways from event to event. Patterns of intense rainfall in flood events are often poorly known, even when radar data are available. Flood discharges, the variable of interest here, are generally not observed directly but are constructed variables of unknown uncertainty (e.g. Beven et al., 2012). In this situation it seems to me that treating a residual model as simply additive and stationary is asking for trouble in prediction (where the epistemic errors might be quite different). There may well be an underlying stochastic process in the long term, but I do not think that is a particularly useful concept in the short term, when there appear to be changing residual characteristics both within and between events, particularly for those disinformative events. Most recently, we have been making estimates of two sets of uncertainty bounds in prediction – one treating the next event as if it might be part of the set of informative events, and one as if it might be part of the set of disinformative events (Beven and Smith, 2013). A priori, of course, we do not know which it will be.
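
As a rough illustration of the kind of physical-consistency screening involved (a hedged sketch of the idea only, not the actual procedure of Beven et al., 2011), an event whose runoff coefficient exceeds one – more runoff leaving the catchment than rainfall observed over it – can be flagged as potentially disinformative. The event definitions, units and thresholds below are assumptions chosen purely for illustration.

```python
# Hedged sketch: flag events whose runoff coefficient is physically
# inconsistent with the observed rainfall as potentially disinformative.
# Units are mm over the event; thresholds are illustrative assumptions.

def runoff_coefficient(rain_mm, flow_mm):
    """Event runoff coefficient: total streamflow volume / total rainfall volume."""
    total_rain = sum(rain_mm)
    return sum(flow_mm) / total_rain if total_rain > 0 else float("inf")

def classify_events(events, lower=0.05, upper=1.0):
    """Split events into informative and possibly disinformative sets.

    A coefficient above 1 implies more runoff left the catchment than rain
    fell on it (physically inconsistent); a very low value may indicate
    rainfall missed by the gauge network.  Both thresholds are illustrative.
    """
    informative, disinformative = [], []
    for name, rain_mm, flow_mm in events:
        rc = runoff_coefficient(rain_mm, flow_mm)
        (informative if lower <= rc <= upper else disinformative).append((name, round(rc, 2)))
    return informative, disinformative

# Example with made-up event totals (mm):
events = [("event A", [5, 12, 8], [2, 6, 4]),    # rc = 0.48 -> informative
          ("event B", [3, 4, 2], [5, 7, 3])]     # rc = 1.67 -> disinformative
print(classify_events(events))
```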

This is not such an issue in flood forecasting. In that case we are only interested in minimizing bias and uncertainty over a limited lead time into the future, and in many cases, when the response time of a catchment is as long as the lead time required, we can make use of data assimilation to correct for disinformation in the inputs and other epistemic sources of error. Even a simple adaptive gain on the model predictions can produce significant benefits in forecasting. It is more of an issue in the flash flood case, where it may be necessary to predict the rainfall inputs ahead of time to get forecasts with an adequate lead time (e.g. Alfieri et al., 2011; Smith et al., 2013). Available methods for forecasting rainfall involve large uncertainties in both the location and intensity of heavy rainfall. This might improve as methods of combining radar data with high-resolution atmospheric models evolve, but will also require improved surface parameterisations to get triggering mechanisms right.
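
To make the adaptive gain idea concrete, here is a minimal sketch (my own illustration, not the Data Based Mechanistic implementation used in the papers cited above): a multiplicative gain on the raw model forecast is updated recursively as observations arrive, with the forgetting factor and initial gain as assumed values.

```python
# Minimal sketch of an adaptive multiplicative gain on model forecasts.
# The forgetting factor and initial gain are illustrative assumptions.

def adaptive_gain_forecast(model_flow, observed_flow, forgetting=0.9, g0=1.0):
    """Return gain-corrected forecasts, updating the gain whenever an
    observation becomes available.

    model_flow    : list of raw one-step-ahead model forecasts
    observed_flow : list of observations (None where not yet available)
    """
    gain = g0
    corrected = []
    for q_mod, q_obs in zip(model_flow, observed_flow):
        corrected.append(gain * q_mod)          # forecast issued with the current gain
        if q_obs is not None and q_mod > 0:
            # recursive update: move the gain towards the latest obs/model ratio
            gain = forgetting * gain + (1 - forgetting) * (q_obs / q_mod)
    return corrected

# Example: the model persistently underpredicts by ~20%; the gain adapts upwards
model = [10.0, 12.0, 15.0, 20.0, 18.0]
obs   = [12.0, 14.5, 18.0, None, None]   # the last two steps are the forecast horizon
print(adaptive_gain_forecast(model, obs))
```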

The real underlying issue here comes back to whether epistemic errors can be treated as statistically stationary or whether, given the short periods of data that are often available, they result in model residual characteristics that are non-stationary.  I think that if they are treated as stationary (particularly within a Gaussian information measure framework) it leads to gross overconfidence in inference (e.g. Beven, 2012; Beven and Smith, 2013).   That is one reason why I have for a long time explored the use of more relaxed likelihood measures and limits of acceptability within the GLUE methodology (e.g. Beven, 2009).   However, that does not change the fact that the only guide we have to future errors is what we have already seen in the calibration/conditioning data.   There is still the possibility of surprise in future predictions.   One of the challenges of the unquantifiable uncertainty issue in CREDIBLE/RACER research is to find ways of protecting against future surprise.
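
As a toy illustration of the limits-of-acceptability idea within GLUE (a sketch under strong simplifying assumptions, not a recommended setup), randomly sampled parameter sets are retained as behavioural only if their simulated flows fall within specified limits around every observation; here the model is a single linear store and the limits are an assumed constant observation error.

```python
# Compact sketch of GLUE-style rejection with limits of acceptability.
# The toy model, prior range and limits are illustrative assumptions.

import random

def toy_model(params, rain):
    """Single linear store: add rainfall, release a fraction k each step."""
    (k,) = params
    s, q = 0.0, []
    for r in rain:
        s += r
        out = k * s
        s -= out
        q.append(out)
    return q

def glue_limits_of_acceptability(rain, limits, n_samples=10000):
    """Retain parameter samples whose simulated flows lie within the
    limits of acceptability at every observation time."""
    behavioural = []
    for _ in range(n_samples):
        k = random.uniform(0.01, 0.99)          # assumed uniform prior on the store coefficient
        sim = toy_model((k,), rain)
        if all(lo <= s <= hi for s, (lo, hi) in zip(sim, limits)):
            behavioural.append(k)
    return behavioural

rain = [10.0, 0.0, 5.0, 0.0, 0.0]
obs  = [3.0, 2.0, 3.5, 2.5, 1.8]
limits = [(o - 1.0, o + 1.0) for o in obs]       # e.g. +/- an assumed observation error
print(len(glue_limits_of_acceptability(rain, limits)), "behavioural samples retained")
```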

References

Alfieri, L., Smith, P. J., Thielen-del Pozo, J., and Beven, K. J., 2011, A staggered approach to flash flood forecasting – case study in the Cévennes region, Adv. Geosci., 29, 13-20.

Beven, K., Smith, P. J., and Wood, A., 2011, On the colour and spin of epistemic error (and what we might do about it), Hydrol. Earth Syst. Sci., 15, 3123-3133, DOI: 10.5194/hess-15-3123-2011.

Beven, K. J., and Smith, P. J., 2013, Concepts of information content and likelihood in parameter calibration for hydrological simulation models, ASCE J. Hydrol. Eng., in press.

Beven, K. J., Buytaert, W., and Smith, L. A., 2012, On virtual observatories and modeled realities (or why discharge must be treated as a virtual variable), Hydrological Processes, DOI: 10.1002/hyp.9261.

Beven, K. J., 2012, So how much of your error is epistemic? Lessons from Japan and Italy, Hydrological Processes, DOI: 10.1002/hyp.9648, in press.

Smith, P. J., Panziera, L., and Beven, K. J., 2013, Forecasting flash floods using Data Based Mechanistic models and NORA radar rainfall forecasts, Hydrological Sciences Journal, in press.

PlumeRise – modelling the interaction of volcanic plumes and meteorology

Dr. Mark Woodhouse (Postdoctoral Research Assistant, School of Mathematics, University of Bristol)

Explosive volcanic eruptions, such as those of Eyjafjallajökull in 2010, Grímsvötn in 2011 and Puyehue-Cordón Caulle in 2011, inject huge quantities of ash high into the atmosphere, where it can be spread over large distances. The 2010 eruption of Eyjafjallajökull, Iceland, demonstrated the vulnerability of European and transatlantic airspace to volcanic ash in the atmosphere. Airspace management during eruptions relies on forecasting the spread of ash.

A crucial requirement for forecasting ash dispersal is the rate at which material is delivered from the volcano to the atmosphere, a quantity known as the source mass flux. It is currently not possible to measure the source mass flux directly, so it is estimated by exploiting a relationship between the source mass flux and the height of the plume, which is obtained from the fundamental dynamics of buoyant plumes and calibrated using a dataset of historical eruptions. However, meteorology is not included in the calibrated scaling relationship. Our recent study, published in the Journal of Geophysical Research, shows that meteorology at the time of the eruption, in particular the wind conditions, has a large effect on the rise of the plume. Neglecting the wind can lead to the source mass flux being underpredicted by more than a factor of ten.
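
To illustrate how a calibrated scaling relationship of this kind is typically used (a sketch only: the coefficients below follow a widely used Mastin-type power-law fit and, like the assumed dense-rock-equivalent density, should be treated as indicative rather than definitive), the source mass flux can be back-calculated from an observed plume height as follows.

```python
# Illustrative sketch: estimate the source mass flux from an observed plume
# height using a calibrated power-law fit of the kind described above,
# H ≈ a * V**b with H the height above the vent in km and V the volumetric
# flux in m^3/s of dense rock equivalent.  Coefficients and density are
# assumptions for illustration.

MAGMA_DENSITY = 2500.0    # kg m^-3, assumed dense-rock-equivalent density

def mass_flux_from_height(height_km, a=2.0, b=0.241, rho=MAGMA_DENSITY):
    """Invert H = a * V**b for the volumetric flux V, then convert to kg/s."""
    volume_flux = (height_km / a) ** (1.0 / b)    # m^3/s
    return rho * volume_flux                       # kg/s

# Example: a plume observed at ~5 km above the vent
print(f"{mass_flux_from_height(5.0):.2e} kg/s")
# Because a fit like this ignores the wind, the true source mass flux in a
# strong cross-wind could be considerably larger than this estimate.
```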

Our model, PlumeRise, allows detailed meteorological data to be included in the calculation of the plume dynamics. By applying PlumeRise to the record of plume height observations during the Eyjafjallajökull eruption we reconstruct the behaviour of the volcano during April 2010, when the disruption to air traffic was greatest. Our results show that the source mass flux at Eyjafjallajökull was up to 30 times higher than estimated using the calibrated scaling relationship.

Underestimates of the source mass flux by such a large amount could lead to unreliable forecasts of ash distribution.  This could have extremely serious consequences for the ash hazard to aviation.  In order to allow our model to be used during future eruptions, we have developed the PlumeRise web-tool.

PlumeRise (www.plumerise.bris.ac.uk) is a free-to-use tool that performs calculations using our model of volcanic plumes.  Users can input meteorological observations, or use idealized atmospheric profiles.  Volcanic source conditions can be specified and the resulting plume height determined, or an inversion calculation can be performed where the source conditions are varied to match the plume height to an observation.  Multiple runs can be performed to allow comparison of different parameter sets.
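
As a minimal sketch of what such an inversion involves (using the simple wind-free power-law fit from the earlier sketch as a stand-in forward model; PlumeRise itself integrates the full plume equations with the supplied meteorology), a single source parameter can be varied until the modelled height matches an observation.

```python
# Hedged sketch of an inversion: vary the volumetric flux until a stand-in
# forward model reproduces an observed plume height.  The forward model and
# its coefficients are illustrative assumptions, not the PlumeRise equations.

def plume_height_km(volume_flux, a=2.0, b=0.241):
    """Stand-in forward model: plume height above the vent (km) vs volumetric flux (m^3/s)."""
    return a * volume_flux ** b

def invert_for_flux(target_height_km, lo=1e-2, hi=1e7, tol=1e-3):
    """Bisection on the monotonically increasing forward model."""
    while hi - lo > tol * lo:
        mid = (lo * hi) ** 0.5                   # geometric mid-point suits the wide range
        if plume_height_km(mid) < target_height_km:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: find the volumetric flux consistent with a 9 km plume
print(f"volumetric flux for a 9 km plume: {invert_for_flux(9.0):.1f} m^3/s")
```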

The PlumeRise model was developed at the University of Bristol by Mark Woodhouse, Andrew Hogg, Jeremy Phillips and Steve Sparks.  The PlumeRise web-tool was developed by Chris Johnson (University of Bristol).  The tool has been tested by several of the Volcanic Ash Advisory Centres (VAACs).  It is also being used by academic institutions around the world.  Our research is part of the VANAHEIM project.  The development of the PlumeRise web-tool was supported by the University of Bristol’s Enterprise and Development Fund.