I’m using the NDR model to assess nutrient export in relation to changes in precipitation.
I’m using a raster showing the average annual precipitation over my study area as my nutrient runoff proxy. In QGIS, I created a second raster in which rainfall is increased by 25%.
In other words, I ran the NDR model twice: once with the “average annual precipitation” raster as the proxy, and once with the “average annual precipitation +25%” raster.
The problem is that the nutrient export results are almost identical between the two runs. It seems that an increase in precipitation does not change nutrient export, probably because this proxy represents “the spatial variability in runoff potential”, but I wonder if I’m correct.
I tried different DEMs with different resolutions, the coordinate systems are correct, and I also changed the Threshold flow accumulation value.
Thank you for your help.
Hi Marco -
First, I’m just wondering whether only the spatial patterns look the same, or whether the underlying values are also identical.
Did you increase all pixels in the precipitation raster by 25%? If so, I think you’re correct to note that the runoff proxy represents spatial variability. If there is no spatial variability in the change in precipitation, the model results won’t change. This is related to what the User Guide says about the calculation of RPI:
RPI_i is the runoff potential index on pixel i. It is defined as:
RPI_i = RP_i / RP_av
where RP_i is the nutrient runoff proxy for runoff on pixel i, and RP_av is the average RP over the raster.
So since it’s creating an index in this way, if everything changes by 25%, it seems like you’d still end up with the same index. You can check this by looking at the model output intermediate_outputs/runoff_proxy_index.tif.
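To see why a uniform change cancels out, here is a toy NumPy sketch (not InVEST source code, and the precipitation values are hypothetical): because each pixel is divided by the raster-wide mean, multiplying every pixel by the same factor leaves the index unchanged.

```python
import numpy as np

# Hypothetical average annual precipitation raster (mm/yr)
precip = np.array([[800.0, 1000.0],
                   [1200.0, 1400.0]])

# Runoff proxy index as defined in the User Guide: RPI_i = RP_i / RP_av
rpi_baseline = precip / precip.mean()

# Uniform +25% scenario: the factor appears in both numerator and mean
rpi_scaled = (precip * 1.25) / (precip * 1.25).mean()

print(np.allclose(rpi_baseline, rpi_scaled))  # True: the +25% cancels out
```

Only a spatially varying change (e.g. +25% in some sub-basins but not others) would alter the index, and hence the export results.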
I have a related question. I ran NDR with two precipitation rasters from different sources. The datasets are from the same year, have the same natural resolution, same units, and are both projected into the coordinate system I’m using in my study.
There are slight differences in the average precipitation values between the rasters, which seems understandable since they come from different sources. However, one raster covers a much larger extent than the other. I ran NDR twice, using each raster with all other inputs the same. The run with the much larger precipitation raster produced a substantially smaller total export, lower than is realistic for the watershed I’m studying.
My questions: Given the calculation for RPI (RPI_i = RP_i / RP_av), does NDR calculate RP_av as the average precipitation of all cells within the watershed boundary shapefile, or over the entire extent of the precipitation input raster?
Could the difference in raster extent be the reason for my substantially different results?
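To illustrate what I suspect is happening, here is a toy sketch (hypothetical values, and it assumes RP_av is taken over the full raster extent, per the User Guide wording “the average RP over the raster”): if the larger raster includes cells outside the watershed with different precipitation, RP_av shifts, and so does every RPI value inside the watershed.

```python
import numpy as np

# Hypothetical precipitation at cells inside the watershed (mm/yr)
watershed = np.array([1000.0, 1100.0, 1200.0])
# Extra cells covered only by the larger raster, outside the watershed
outside = np.array([400.0, 500.0])

# RP_av computed only over the watershed cells (smaller raster)
rpi_small = watershed / watershed.mean()
# RP_av computed over the full, larger raster extent
rpi_large = watershed / np.concatenate([watershed, outside]).mean()

print(rpi_small)  # centered around 1.0
print(rpi_large)  # shifted, because RP_av changed with the extent
```

In this toy case the drier surroundings pull RP_av down and inflate the watershed’s RPI; wetter surroundings would do the opposite, which could explain a lower export with the larger raster.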
I’ve attached the run logs. Thank you in advance for any insight!
NDR run with smaller raster: InVEST-Nutrient-Delivery-Ratio-Model-(NDR)-log-2022-01-04–11_10_20.txt (22.4 KB)
NDR run with larger raster: InVEST-Nutrient-Delivery-Ratio-Model-(NDR)-log-2022-04-01–16_10_46.txt (22.1 KB)
@atershy I think we might already be discussing this over in NDR: Runoff Proxy Index. Spatial extent of precipitation raster, so I’ll close this topic.