During my Master's thesis I compared the influence of different grid resolutions on the results. The model was not calibrated; I just used the default values. I compared 0.5 m, 1 m, 5 m, 10 m and 20 m. The difference in soil loss alone is not that big, about 18% between 0.5 m and 20 m, but the difference in the export results is significant.
It shows the sensitivity of the model to grid resolution, but also the need for calibration with models like this.
Thanks for sharing this. Did you have DEMs for each of these resolutions or did you resample? I’d also be curious to look at the routing paths or created stream layers for each of these. I could see how having too fine a resolution could be problematic.
I had a DEM for the catchment (31 km²) with a resolution of 0.5 m × 0.5 m (airborne laser scanning). I just resampled the data to the coarser resolutions.
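In case it helps to illustrate the resampling step: a minimal sketch of aggregating a fine DEM array to a coarser grid by block averaging, in plain NumPy. This is just an illustration of the idea, not whatever GIS tool/resampling method (nearest, bilinear, etc.) was actually used for the thesis; the function name and the choice of mean aggregation are my own.

```python
import numpy as np

def block_average(dem, factor):
    """Resample a 2-D DEM array to a coarser grid by averaging
    factor x factor blocks (e.g. factor=2 turns 0.5 m cells into 1 m).
    Edges that don't divide evenly are cropped off."""
    rows, cols = dem.shape
    rows -= rows % factor  # crop so dimensions divide evenly
    cols -= cols % factor
    trimmed = dem[:rows, :cols]
    # reshape into (blocks, factor, blocks, factor) and average each block
    return trimmed.reshape(rows // factor, factor,
                           cols // factor, factor).mean(axis=(1, 3))

# example: a 4x4 grid averaged down to 2x2
fine = np.arange(16, dtype=float).reshape(4, 4)
coarse = block_average(fine, 2)
```

Note that averaging already smooths out local highs and lows in the terrain, which is one way resolution loss feeds into the later routing and LS calculations.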
Can you explain what causes these inaccuracies?
In the formula for the LS factor, the grid cell linear dimension D is used; this obviously has a major impact on the result.
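To make the D-dependence concrete, here is a sketch of the per-pixel LS equation along the lines of Desmet & Govers (1996), which I believe is the form SDR-style models use; treat the exact formula, the parameter values, and the function name as my assumptions, not the model's documented implementation.

```python
import numpy as np

def ls_factor(slope_factor, upstream_area, cell_size, aspect=0.0, m=0.5):
    """Per-pixel LS factor in the style of Desmet & Govers (1996).

    slope_factor   S_i, the slope factor (dimensionless)
    upstream_area  A_(i,in), area draining into the pixel (m^2)
    cell_size      D, the grid cell linear dimension (m)
    aspect         flow direction (radians), enters via x = |sin a| + |cos a|
    m              length exponent (illustrative default)
    """
    x = abs(np.sin(aspect)) + abs(np.cos(aspect))
    num = (upstream_area + cell_size ** 2) ** (m + 1) - upstream_area ** (m + 1)
    den = cell_size ** (m + 2) * (x ** m) * (22.13 ** m)
    return slope_factor * num / den

# same slope and contributing area, only D changes from 0.5 m to 20 m:
ls_fine = ls_factor(1.0, 1000.0, 0.5)
ls_coarse = ls_factor(1.0, 1000.0, 20.0)
```

Even with the contributing area held fixed, changing D alone moves the LS value substantially, which is consistent with the sensitivity you saw.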
Hi @bastian87, there are a lot of factors in play when the resolution changes. A few years ago we did a study on this aspect as well as other biophysical aspects of the SDR. You might find the paper helpful: https://www.researchgate.net/profile/Kim_Falinski/publication/311959706_Sediment_delivery_modeling_in_practice_Comparing_the_effects_of_watershed_characteristics_and_data_resolution_across_hydroclimatic_regions/links/5a18ccc10f7e9be37f9762ea/Sediment-delivery-modeling-in-practice-Comparing-the-effects-of-watershed-characteristics-and-data-resolution-across-hydroclimatic-regions.pdf I believe you can see the behavior you’re describing in Figure 5.
There are a number of issues you could point to as the cause. @dcdenu4 raises a good point that a change in the DEM resolution could change the flowpaths, and this model is particularly sensitive to the landcover type the flowpaths trace. If a finer resolution routed through a few highly retentive pixels but then a coarser one changed that flow, or even worse, lost those highly retentive pixels during resampling, you could expect a significant change in your output. A good way to sniff out this issue is to compare the flow accumulation maps, stream maps, and resampled landcover maps and see if there's something visually obvious. You could also take the difference between the export layer maps at the two resolutions and look at the hotspots; maybe they point to an issue?
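The difference-and-look-at-hotspots step could be sketched like this in NumPy, assuming both export rasters have already been warped onto the same grid. The function name, the percentile threshold, and the assumption of matching shapes are all mine; for real rasters you would read the arrays with something like GDAL or rasterio first.

```python
import numpy as np

def export_hotspots(fine_export, coarse_export, top_percent=5.0):
    """Difference two export rasters (same shape: the coarse run already
    resampled onto the fine grid) and flag the cells whose absolute
    difference falls in the top `top_percent` percent.

    Returns (diff, hotspot_mask).
    """
    diff = fine_export - coarse_export
    abs_diff = np.abs(diff)
    threshold = np.percentile(abs_diff, 100.0 - top_percent)
    return diff, abs_diff > threshold

# toy example: identical everywhere except one cell
fine = np.zeros((10, 10))
fine[0, 0] = 100.0
diff, mask = export_hotspots(fine, np.zeros((10, 10)))
```

Mapping the hotspot mask back onto the landcover or stream layers is then a quick way to see whether the disagreement clusters along flowpaths or retentive patches.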
There’s always numerical discretization error to consider, but with your cell sizes I wouldn’t expect that to be an issue, you’d only see that if you got too coarse.
The last thing I think about is the mathematical model behind SDR. From what I recall, SDR is derived from “landscape level” measurements, say dozens of meters on a slope or something bigger on a small watershed. That would mean this model is not necessarily governed by an underlying PDE that could be continuously refined to have smaller and smaller cells for more accuracy. I suspect that SDR-like behavior would not be observed in an experimental setting for very small patches of landscape (say, less than 3 meters or so). I’m mentioning this since it seems your baseline was a 0.5 m grid cell. This is just my educated suspicion, but I’ve often wondered if having a little bit coarser grid size near the scale the science was defined would give a better modeling result for SDR.
Anyway, I hope that’s useful! And I’d love to take a look at your Master’s thesis if you’d like to post it here!