Quantifying the unquantifiable: floods

Keith Beven

We had the first joint meeting of the RACER and CREDIBLE consortia in the area of floods at Imperial College last week. Saying something about the frequency and magnitude of floods is difficult, especially when estimates are required for low probability events (Annual Exceedance Probabilities of 0.01 and 0.001 for flood plain planning purposes in the UK, for example) without very long records being available. Even for short-term flood forecasting there can be large uncertainties associated with observed rainfall inputs, even more associated with rainfalls forecast into the future, a strongly uncertain nonlinear relationship between the antecedent conditions in a catchment and how much of that rainfall becomes streamflow, and uncertainty in the routing of that streamflow down to areas at risk of flooding. In both flood frequency and flood forecasting applications, there is also the issue of how flood magnitudes might change in the future due to both land management and climate changes.

Nearly all of these sources of uncertainty have both aleatory and epistemic components. As a hydrologist, I am not short of hydrological models – there are far too many in the literature, and there are also modelling systems that provide facilities for choosing and combining different types of model components to represent the fast and slow elements of catchment response (e.g. PRMS, FUSE, SuperFLEX). I am, however, short of good ways of representing the uncertainties in the inputs and in the observed variables with which model outputs might be compared.

Why is this? It is because some of the errors and uncertainties are essentially unquantifiable epistemic errors. In rainfall-runoff modelling we have inputs that are not free of epistemic error being processed through a nonlinear, non-error-free model and compared with output observations that are not free from epistemic error. Some of the available data may actually be disinformative in the sense of being physically inconsistent (e.g. more runoff in the stream than observed rainfall; Beven et al., 2011), but in rather arbitrary ways from event to event. Patterns of intense rainfall in flood events are often poorly known, even if radar data are available. Flood discharges, the variable of interest here, are generally not observed but are rather constructed variables of unknown uncertainty (e.g. Beven et al., 2012). In this situation it seems to me that treating a residual model as simply additive and stationary is asking for trouble in prediction (where the epistemic errors might be quite different). There may well be an underlying stochastic process in the long term, but I do not think that is a particularly useful concept in the short term, when residual characteristics appear to change both within and between events, particularly for those disinformative events. Most recently, we have been making estimates of two sets of uncertainty bounds in prediction – one treating the next event as if it might be part of the set of informative events, and one as if it might be part of the set of disinformative events (Beven and Smith, 2013). A priori, of course, we do not know which it will be.
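As a rough illustration of that physical consistency check, the sketch below flags events whose runoff coefficient (event runoff depth divided by event rainfall depth) exceeds one. It is only a minimal Python sketch, not the screening procedure of Beven et al. (2011); the event list, the baseflow separation it presupposes, and the function names are all illustrative.

# Minimal sketch: flag events that produce more runoff than observed rainfall.
# Assumes event boundaries and baseflow separation have already been done;
# rainfall and runoff are catchment-average depths in mm. Names are illustrative.

def runoff_coefficient(event_rainfall_mm, event_runoff_mm):
    """Ratio of event runoff to event rainfall."""
    if event_rainfall_mm <= 0.0:
        return float("inf")  # runoff with no recorded rain is clearly inconsistent
    return event_runoff_mm / event_rainfall_mm

def flag_disinformative(events, upper=1.0, lower=0.0):
    """Return events whose runoff coefficient falls outside plausible limits."""
    flagged = []
    for name, rain_mm, runoff_mm in events:
        rc = runoff_coefficient(rain_mm, runoff_mm)
        if rc > upper or rc < lower:
            flagged.append((name, rc))
    return flagged

# Example: the second event yields more runoff than rainfall and is flagged.
events = [("event_01", 42.0, 18.5), ("event_02", 12.0, 15.3)]
print(flag_disinformative(events))  # [('event_02', 1.275)]

In practice any such limits would themselves need to allow for uncertainty in the rainfall and discharge estimates, which is exactly where the epistemic difficulty lies.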

This is not such an issue in flood forecasting. In that case we are only interested in minimizing bias and uncertainty for a limited lead time into the future, and in many cases, when the response time of a catchment is as long as the lead time required, we can make use of data assimilation to correct for disinformation in the inputs and other epistemic sources of error. Even a simple adaptive gain on the model predictions can produce significant benefits in forecasting. It is more of an issue in the flash flood case, where it may be necessary to predict the rainfall inputs ahead of time to get forecasts with an adequate lead time (e.g. Alfieri et al., 2011; Smith et al., 2013). Available methods for predicting rainfalls involve large uncertainties in both the locations and the intensities of heavy rainfall. This might improve as methods of combining radar data with high resolution atmospheric models evolve, but will also require improved surface parameterisations to get triggering mechanisms right.
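To make the adaptive gain idea concrete, here is a minimal Python sketch of one simple way it could be implemented: a multiplicative gain updated recursively from the most recent ratio of observed to modelled flow, then applied to the raw forecast. The forgetting factor and the class and variable names are illustrative assumptions, not a description of any particular operational scheme.

# Minimal sketch of an adaptive gain correction, assuming observations arrive
# with a delay shorter than the required forecast lead time. The value of
# alpha is an illustrative choice, not a recommendation.

class AdaptiveGain:
    def __init__(self, alpha=0.3):
        self.alpha = alpha   # weight given to the newest observation
        self.gain = 1.0      # start with no correction

    def update(self, observed_flow, modelled_flow):
        """Update the gain when a new (observation, model) pair becomes available."""
        if modelled_flow > 0.0:
            ratio = observed_flow / modelled_flow
            self.gain = (1.0 - self.alpha) * self.gain + self.alpha * ratio

    def correct(self, model_forecast):
        """Apply the current gain to a raw model forecast."""
        return self.gain * model_forecast

# Example: the model has been running persistently low, so forecasts are scaled up.
ag = AdaptiveGain()
for obs, sim in [(12.0, 10.0), (18.0, 15.1), (25.0, 20.8)]:
    ag.update(obs, sim)
print(ag.correct(30.0))  # raw forecast of 30 m3/s scaled by the learned gain (about 1.13)

Even a crude correction of this kind can remove much of a persistent bias within an event; more formal data assimilation schemes follow the same logic while making explicit allowance for the uncertainties involved.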

The real underlying issue here comes back to whether epistemic errors can be treated as statistically stationary or whether, given the short periods of data that are often available, they result in model residual characteristics that are non-stationary. I think that if they are treated as stationary (particularly within a Gaussian information measure framework) the result is gross overconfidence in inference (e.g. Beven, 2012; Beven and Smith, 2013). That is one reason why I have for a long time explored the use of more relaxed likelihood measures and limits of acceptability within the GLUE methodology (e.g. Beven, 2009). However, that does not change the fact that the only guide we have to future errors is what we have already seen in the calibration/conditioning data. There is still the possibility of surprise in future predictions. One of the challenges of the unquantifiable uncertainty issue in CREDIBLE/RACER research is to find ways of protecting against future surprise.
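For readers unfamiliar with the limits-of-acceptability idea, the sketch below shows the basic form of the test: each observation carries its own acceptable range, and a model run is retained as behavioural only if its simulation stays within those limits. This is a minimal Python illustration under assumed fractional limits, not the GLUE implementation itself; in practice the limits would be set from what is known about the observational uncertainties (rating curves, rainfall estimates) before looking at the model results.

# Minimal sketch of a limits-of-acceptability test. The fractional limits
# used here are illustrative assumptions only.

def within_limits(simulated, observed, lower_frac=0.2, upper_frac=0.2):
    """True if every simulated value lies inside its per-observation limits."""
    for sim, obs in zip(simulated, observed):
        lower = obs * (1.0 - lower_frac)
        upper = obs * (1.0 + upper_frac)
        if not (lower <= sim <= upper):
            return False
    return True

def behavioural_runs(runs, observed):
    """Filter an ensemble of (parameter_set, simulated_series) pairs."""
    return [params for params, sim in runs if within_limits(sim, observed)]

# Example with two candidate parameter sets; only the first is retained.
observed = [10.0, 22.0, 15.0]
runs = [({"k": 0.8}, [9.5, 23.0, 14.0]),
        ({"k": 1.6}, [14.0, 30.0, 20.0])]
print(behavioural_runs(runs, observed))  # [{'k': 0.8}]

The attraction of expressing acceptability in this way is that the decision about what counts as an acceptable model is made explicit and open to challenge, rather than being buried in the assumptions of a formal error model.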

References

Alfieri, L., Smith, P. J., Thielen-del Pozo, J., and Beven, K. J., 2011, A staggered approach to flash flood forecasting – case study in the Cevennes Region, Adv. Geosci., 29, 13-20.

Beven, K., Smith, P. J., and Wood, A., 2011, On the colour and spin of epistemic error (and what we might do about it), Hydrol. Earth Syst. Sci., 15, 3123-3133, doi: 10.5194/hess-15-3123-2011.

Beven, K. J., and Smith, P. J., 2013, Concepts of Information Content and Likelihood in Parameter Calibration for Hydrological Simulation Models, ASCE J. Hydrol. Eng., in press.

Beven, K. J., Buytaert, W., and Smith, L. A., 2012, On virtual observatories and modeled realities (or why discharge must be treated as a virtual variable), Hydrological Processes, doi: 10.1002/hyp.9261.

Beven, K. J., 2012, So how much of your error is epistemic? Lessons from Japan and Italy, Hydrological Processes, doi: 10.1002/hyp.9648, in press.

Smith, P. J., Panziera, L., and Beven, K. J., 2013, Forecasting flash floods using Data Based Mechanistic models and NORA radar rainfall forecasts, Hydrological Sciences Journal, in press.