I would like to clarify my understanding of the RPI calculation…
I ran NDR twice using the same precipitation raster, except that for my second run I used a smaller version of the raster, clipped to the watershed boundary. All other inputs were identical between runs. The unclipped raster covers about 30% more area than the clipped raster.
The NDR run that used the unclipped precipitation raster had a higher total N export (1,379,693 kg), comparable to field data. The run that used the clipped raster had a much lower total export (616,837 kg), lower than is realistic for the watershed I’m studying.
The difference in total surface N load between the runs was roughly proportional to the difference in N export:
Unclipped: 11,493,329 kg
Clipped: 5,137,968 kg
My questions:
1. Given the calculation for RPI (RPI_i = RP_i / RP_{av}), does NDR calculate RP_{av} as the average precipitation of all cells within the required watershed boundary shapefile input, or as the average over the entire extent of the precipitation input raster?
2. Could the difference in raster extent be the reason for my substantially different results?
3. I’m in the process of calibrating my model. How can I choose the best extent of the precipitation raster that will give me the most accurate results? Is there a recommended extent?
Below are the logs for the model runs! Thank you in advance for any insight! @swolny @dcdenu4
By precip, I assume you mean the runoff proxy raster?
NDR calculates RP_{av} as the mean value of all non-nodata runoff proxy pixels within the watershed boundary vector. If you’d like to take a look at the values that affect this calculation, feel free to open up <your_workspace>/intermediate_outputs/cache_dir/aligned_runoff_proxy.tif.
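For reference, here is roughly what that calculation looks like if you want to reproduce it yourself. This is a minimal sketch using the GDAL Python bindings and numpy, not the model’s actual code, and the path is just an example:

```python
# A minimal sketch of the RP_av calculation, assuming the GDAL Python
# bindings and numpy are installed; the path is illustrative.
import numpy
from osgeo import gdal

raster = gdal.Open("intermediate_outputs/cache_dir/aligned_runoff_proxy.tif")
band = raster.GetRasterBand(1)
array = band.ReadAsArray().astype(numpy.float64)
nodata = band.GetNoDataValue()

# Only non-nodata pixels (i.e. those inside the watershed) enter the mean.
valid = array != nodata if nodata is not None else numpy.ones(array.shape, bool)
rp_av = array[valid].mean()

# Each pixel's index is then RPI_i = RP_i / RP_av.
print("RP_av:", rp_av)
```

Reading the whole band into memory is fine for a quick check like this, even though the model itself processes rasters in blocks.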
If that’s the only input that changed, then it sounds like a likely culprit! You might try opening the aligned*.tif rasters in the cache directory I mentioned above to make sure that the intersection of the regions looks like what you would expect.
When the model aligns its inputs, it resamples the DEM, LULC, and runoff proxy rasters so that they all have the same extents and the DEM’s pixel size, and it also masks out any pixels outside of the watershed vector. If all 3 of these rasters already have matching projections, then the most likely explanation is that NDR is computing a different bounding box for these inputs in each run and aligning them slightly differently.
So, it’s not so much that there’s a recommended extent, but I’d make sure that all 3 of these raster inputs completely overlap your watershed vector by a reasonable margin.
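If it helps, here’s a quick sketch for checking that each raster’s bounding box covers the watershed vector. It assumes the GDAL/OGR Python bindings, that all inputs share one projection, and placeholder file names for your own inputs:

```python
# Quick overlap check: does each raster's bounding box contain the
# watershed vector's extent? File names are placeholders.
from osgeo import gdal, ogr

def raster_bbox(path):
    ds = gdal.Open(path)
    gt = ds.GetGeoTransform()
    xmin, ymax = gt[0], gt[3]
    xmax = xmin + gt[1] * ds.RasterXSize
    ymin = ymax + gt[5] * ds.RasterYSize  # gt[5] is negative for north-up
    return xmin, ymin, xmax, ymax

vector = ogr.Open("watersheds.shp")
vxmin, vxmax, vymin, vymax = vector.GetLayer().GetExtent()  # OGR's extent order

for path in ("dem.tif", "lulc.tif", "runoff_proxy.tif"):
    rxmin, rymin, rxmax, rymax = raster_bbox(path)
    covers = (rxmin <= vxmin and rymin <= vymin and
              rxmax >= vxmax and rymax >= vymax)
    print(path, "covers the watershed" if covers else "DOES NOT cover the watershed")
```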
If there’s still an unexplained difference, could you share your inputs with me so I can try to take a closer look? Sharing a filesharing folder with my email jdouglass@stanford.edu might be easiest.
Thank you for your insight! I followed your advice, and I’m still seeing unexplained differences. If you are able to, I would appreciate any further insight you could provide. I emailed you my inputs via a OneDrive folder; my email is tershya@wwu.edu. If I’m able to figure out the issue, I’ll summarize the solution here for other users. Thank you!
I finally had a chance to dig into this, and there’s a bug in the model when the runoff proxy raster does not have a nodata value defined. When this happens, the model masks out values outside of the watersheds vector using a value of 0 rather than nodata. Those 0 pixels are then indistinguishable from real data, so they are included when computing RP_{av} and skew the normalized runoff proxy index, and that skew propagates through the rest of the model.
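To make the mechanism concrete, here’s a toy numpy illustration (made-up numbers, not the model’s code) of how 0-filled padding changes the normalization when there is no nodata value to exclude it:

```python
import numpy

inside = numpy.array([900.0, 1000.0, 1100.0])  # pixels inside the watershed

# Correct behavior: masked pixels are nodata and excluded from the mean.
rp_av_correct = inside.mean()  # 1000.0

# Buggy behavior: masked pixels are 0 and counted as real data.
padded = numpy.concatenate([inside, numpy.zeros(3)])
rp_av_buggy = padded.mean()  # 500.0

# RPI_i = RP_i / RP_av, so every pixel's index shifts, and the distortion
# grows with the amount of 0-filled area, which is why two rasters with
# different extents can produce very different exports.
print(inside / rp_av_correct)  # [0.9  1.   1.1]
print(inside / rp_av_buggy)    # [1.8  2.   2.2]
```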
The workaround is to ensure that a nodata value is set on the runoff proxy raster input. When a nodata value is set, the two input runoff proxy rasters yield almost identical model outputs.
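In case it’s useful, here is one way to set that (a sketch with the GDAL Python bindings; pick a nodata value that cannot occur in your real data, and note that -9999 here is just an assumption about your raster):

```python
from osgeo import gdal

# Open the raster in update mode and record a nodata value in its metadata.
raster = gdal.Open("runoff_proxy.tif", gdal.GA_Update)
band = raster.GetRasterBand(1)
band.SetNoDataValue(-9999)
band = None
raster = None  # dereference so GDAL flushes the change to disk
```

The gdal_edit.py utility that ships with GDAL can do the same from the command line via its -a_nodata flag.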
But I’ll also be looking into this to see if we can assume a reasonable nodata value when one does not exist.