Flood Defence Performance
We're very pleased to have been able, along with JBA, to support MSc student Janie Haven on the Water and Environmental Management course at the University of Bristol. She completed her dissertation, A Comparison of Actual Fluvial Embankment Flood Defence Performance to RASP Estimated Performance, in 2013, and the results of her work are summarised here.
The findings were that current models of defence performance, expressed as fragility curves describing the probability of failure as a function of loading, may significantly overestimate the occurrence of breaching in river defences in England - by as much as an order of magnitude. This will have consequences for how we map flood risk and manage defence maintenance programmes.
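A fragility curve can be sketched as a simple function mapping hydraulic load to failure probability. The curve below is a minimal illustration using a normal-CDF shape with made-up parameters - it is not the RASP parameterisation, just a way of showing what "probability of failure as a function of loading" means:

```python
import math

def fragility(load, mu=2.0, sigma=0.5):
    """Hypothetical fragility curve: probability of defence failure
    as a function of hydraulic load (m of water against the defence).

    mu is the load at which failure probability reaches 50%, sigma
    controls how sharply the probability rises. Illustrative values
    only - not the RASP parameterisation.
    """
    return 0.5 * (1.0 + math.erf((load - mu) / (sigma * math.sqrt(2.0))))

# An order-of-magnitude overestimate means observed breach rates sit
# roughly a factor of ten below the curve at a given load:
estimated = fragility(1.5)       # model estimate at 1.5 m of loading
observed = estimated / 10.0      # scale of the reported discrepancy
```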
The Environment Agency are currently updating their defence performance models, and we're hoping to be able to do a similar analysis for coastal defences in the near future.
Flood Risk Mapping with S-grid - Faster, Cheaper, and More Robust
Modelling flood flows can be a CPU intensive process - representing floodplains at the scales required to capture detailed processes means we're always going to be doing lots of calculations. Trying to map floods at national scale and beyond means we need a lot of computing power - multiple CPUs, GPUs or even supercomputers. But modelling floods doesn't need to be so computationally intensive - we've developed the S-grid model to allow national scale flood risk mapping, rapidly on a standard desktop PC or laptop.
S-grid uses a sub-grid approach to represent storage and flow between large cells (typically 1km squares) - like a grid of storage cells linked by 1D cross sections. This allows the model to simulate large areas very quickly - a model of the UK for a 24 hour storm will typically run in less than 2 hours, while still representing the important hydraulic characteristics of the floodplain. The model is coded in a mix of Python and C++, meaning it's quick to set up (10 minutes or so to build a new model) and easy to extend with new features through the Python code.

While working at a 1km grid misses out lots of detail, there is actually an advantage. When we're working with poor quality topographic data (from satellites, for example), any small scale detail is obscured by noise - and modelling at these scales is just going to be dominated by that noise. We get a better picture of what's going on by zooming out to a larger scale, where we start to see the true structure of the floodplain rather than just noise. Working at 1km resolution is therefore actually an advantage when we're dealing with low quality data sets.
This can be especially important when using a direct rainfall model, where we try to mimic the catchment hydrology by adding water to the model as rainfall and letting it find its own way through the catchment. Trying to do this on a noisy DTM is pointless - the water just fills up the artificial "puddles" caused by the noise in the data.
In a 1km square model the water falling on a square automatically accumulates at its lowest point, where there is a good chance it can flow into the next cell and make its way along the catchment. This means it's possible to run direct rainfall models on noisy DTMs, without any DTM preprocessing to fill depressions, and S-grid will still represent the catchment behaviour.
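The storage-cell idea can be sketched in a few lines. The fragment below is a minimal two-cell illustration, not the actual S-grid code: each cell holds a volume of water, levels come from a (here deliberately simplified, flat) stage-volume relationship, and exchange between cells is a broad-crested weir formula over the lowest connecting ground level - the "sub-grid" part that lets water accumulate at a cell's low point and spill into its neighbour:

```python
def exchange_flow(h1, h2, crest, width=100.0, c=1.7):
    """Broad-crested weir flow (m^3/s) from cell 1 to cell 2.
    crest is the lowest connecting ground level between the cells;
    width and c are illustrative weir parameters."""
    head = max(h1, h2) - crest
    if head <= 0.0:
        return 0.0          # water level below the connecting ground
    q = c * width * head ** 1.5
    return q if h1 >= h2 else -q

def step(volumes, areas, bed, crest, dt=60.0):
    """Advance two linked storage cells by one explicit time step.
    Water levels come from volume / plan area (a flat stage-volume
    curve for simplicity; S-grid derives this from sub-grid
    topography)."""
    h = [bed[i] + volumes[i] / areas[i] for i in range(2)]
    q = exchange_flow(h[0], h[1], crest)
    volumes[0] -= q * dt
    volumes[1] += q * dt
    return volumes
```

Because the exchange only activates once water rises above the connecting ground level, rainfall landing in a noisy cell simply pools at its low point until it can genuinely spill - the behaviour described above.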
The example below shows the flood depths predicted by S-grid using two topographic data sets - Ordnance Survey Terrain 50 and SRTM. The SRTM topography has a vertical error of several metres, and even though we've reduced this by applying a 3x3 moving average filter, conventional wisdom would have us believe that this is not good enough for flood modelling on a (comparatively) small catchment like this.
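A 3x3 moving-average filter of the kind mentioned above is straightforward to apply to a gridded DTM. A minimal sketch (edge-padded so the output keeps the input shape; the exact filter used in the study may differ):

```python
import numpy as np

def smooth3x3(dtm):
    """3x3 moving-average filter to reduce random vertical noise in a
    DTM such as SRTM. Each output cell is the mean of the 3x3
    neighbourhood; edges are handled by edge-padding."""
    padded = np.pad(dtm, 1, mode="edge")
    out = np.zeros(dtm.shape, dtype=float)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += padded[1 + di : 1 + di + dtm.shape[0],
                          1 + dj : 1 + dj + dtm.shape[1]]
    return out / 9.0
```

Averaging independent noise over 9 cells reduces its standard deviation by roughly a factor of 3, at the cost of blurring genuine small-scale relief - acceptable here, since the model works at 1km resolution anyway.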
But S-grid compares well with the OS Terrain data (which is less noisy), reproducing the same overall flood extent and pattern of flooding. The predictions may not be as accurate as they would be with LiDAR data, for example, but they are certainly still useful - and delivered at a fraction of the cost of LiDAR survey and conventional modelling. This could be crucial in areas with poor data provision.
Hydrology, statistics and the changing climate
The Met Office announced that 2012 was the second wettest year on record for the UK (link), and that 4 of the 5 wettest years have occurred since 2000. So the UK appears to be getting wetter - but does this translate into an increased likelihood of flooding?
We're fortunate in the UK to have access to high quality national flow data through the National River Flow Archive website, run by the Centre for Ecology and Hydrology. I've done some quick analyses of these records using a method called Quantile Regression to see if there's any evidence for increasing extreme flows over the last few decades. Quantile regression is a bit like the usual least squares regression used to fit a straight line through the middle of a cloud of data, except it tries to pick out more extreme values.
The plots below show the results for the rivers Thames and Tay, with Quantile Regression used to find trends in the size of flood expected every 10 years or less. For the Thames we actually see a downward trend, and for the Tay a slight upward trend. But the real story here is told by the upper and lower bounds on the trend lines - the spread is too large to draw significant conclusions about trends in the data. Essentially, we don't have river flow records collected over a long enough period to see a significant increase in extreme flows.
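The idea behind quantile regression is to fit a line that minimises the "pinball" (quantile) loss rather than squared error, so the line tracks a chosen quantile of the data instead of the mean. The sketch below is a minimal stand-in for a proper quantile-regression package, fitting a straight-line trend in the q-th quantile of annual maximum flows (hypothetical function name; the study's actual method and software are not specified here):

```python
import numpy as np
from scipy.optimize import minimize

def quantile_trend(years, flows, q=0.9):
    """Fit flow = a + b*year by minimising the pinball loss, so the
    line follows the q-th quantile of the flow data (q=0.9 picks out
    the larger annual maxima). Returns (intercept, slope)."""
    def pinball(params):
        a, b = params
        r = flows - (a + b * years)
        # asymmetric loss: over-predictions and under-predictions
        # are weighted (1-q) and q respectively
        return np.mean(np.where(r >= 0, q * r, (q - 1) * r))
    x0 = [np.median(flows), 0.0]
    res = minimize(pinball, x0, method="Nelder-Mead",
                   options={"xatol": 1e-6, "fatol": 1e-8, "maxiter": 2000})
    return res.x
```

The slope b is the trend in the extreme-flow quantile; the wide confidence bounds mentioned above come from resampling fits like this one.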
So why aren't we seeing an increase in flooding, when the Met Office reports an increasing trend in extreme rainfall? It all depends on how you define extreme. The Met Office define extreme rainfall as that which occurs every 100 days or less, but extreme floods are typically defined as those occurring every 100 years or less. The meteorologists' extreme events are therefore going to happen a lot more often, and it's easier to see the trends in the statistics.
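The difference in sample sizes is stark. A rough count, assuming independent events at the stated return periods (illustrative arithmetic only):

```python
# Expected event counts in a 30-year daily record under the two
# definitions of "extreme" discussed above.
record_years = 30
record_days = record_years * 365

extreme_rain_events = record_days / 100     # "1 in 100 days" definition
extreme_flood_events = record_years / 100   # "1 in 100 years" definition

# Roughly a hundred rainfall extremes versus a fraction of one flood
# extreme: trends in the former are far easier to detect.
```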
Work continues on this - the next step is to see whether aggregating data from across the whole of the UK can reveal any significant trends.
Turbulence Modelling in Open Channels
Engineers have been using Manning's equation for over 100 years to estimate the capacity of open channels to convey water. It's a simple formula based on empirical results - and despite extensive criticism from researchers, engineers are still using it in hydraulic design calculations and modelling.
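For reference, Manning's equation relates discharge to channel geometry, slope and an empirical roughness coefficient n. A minimal implementation with an illustrative worked example (all values made up):

```python
def manning_discharge(n, area, wetted_perimeter, slope):
    """Manning's equation: Q = (1/n) * A * R^(2/3) * S^(1/2),
    where R = A / P is the hydraulic radius (SI units)."""
    r = area / wetted_perimeter
    return (1.0 / n) * area * r ** (2.0 / 3.0) * slope ** 0.5

# Example: a 10 m wide rectangular channel flowing 2 m deep,
# n = 0.035, bed slope 1 in 1000 (illustrative values):
q = manning_discharge(0.035, area=10 * 2,
                      wetted_perimeter=10 + 2 * 2, slope=0.001)
# q is roughly 23 m^3/s
```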
Why does Manning's equation work so well? I've developed a mixing length model which can explain why Manning's equation works, and can be used to extend its application to more complex hydraulics, such as flow through bridges. The next challenge is to include this model in shallow water flow modelling.
A paper on this has been published in Proceedings of the ICE - Water Management, here.
Numerical Modelling for River Flood Mapping
How accurate are "state of the art" hydraulic models? Or putting it another way - if we know the flow in a river channel, how well can we model water levels and flood extent? After the floods in Carlisle in 2005, a team from the University of Bristol and I collected a series of maximum water levels using GPS, and since then we've been using these to evaluate models of hydraulics in the river channel and floodplain.
We've been using the SFV (Simple Finite Volume) model, which was developed as a research tool to understand the hydraulic behaviour of river channels and floodplains. Using SFV we can build highly detailed models of river channels and structures such as bridges, and in Carlisle we've represented these features as a mesh of 30 000 triangles, some as small as 1m. This means we can represent flow around bridge piers explicitly, and hence estimate the effect of bridges in raising water levels upstream of the constriction.
In most studies like these, researchers use calibration - tweaking roughness values in the channel and floodplain to get the best fit between the model predictions and observed flood levels or extent. We've taken a different approach, where we've used information from gauging stations on the rivers Eden and Caldew to estimate roughness parameters.
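One simple way to estimate roughness from gauge data is to invert Manning's equation at a gauged flow. The sketch below is a simplified illustration of that idea - the actual estimation used for the Eden and Caldew is more involved:

```python
def manning_n_from_gauge(discharge, area, wetted_perimeter, slope):
    """Back-calculate Manning's n from a gauged discharge by
    inverting Manning's equation: n = A * R^(2/3) * S^(1/2) / Q,
    with R = A / P the hydraulic radius (SI units). A simplified
    stand-in for roughness estimation from gauging-station data."""
    r = area / wetted_perimeter
    return area * r ** (2.0 / 3.0) * slope ** 0.5 / discharge
```

Given a rated discharge and the surveyed cross-section at the gauge, this yields a roughness value that can be carried into the model without calibration against the flood observations themselves - which is what makes the comparison a genuine validation.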
Validation of the SFV model for the Carlisle 2005 flood event. The model agrees well with the observed extent (in red).
These roughness values work well - water levels predicted for the 2005 event are within around 40cm of the measured levels. The errors in the model predictions are roughly the same magnitude as the errors in the measurements, which is as good as we can get with this validation data. There is also some evidence that the model does well in representing elevated water levels behind bridges.
The SFV model predicts flood extents along the Eden fairly well, with a good match to the measured values (shown as squares). The model seems to show some skill in representing the water level drop across bridges.
For more information on modelling the Carlisle 2005 flood event, see our paper in Proceedings of ICE - Water Management here.