NDR Model 3.6.0 subsurface inputs?

Hi there,

I am working through the NDR model and am confused about the Subsurface Maximum Retention Efficiency. Are there any recommended values? I would guess that the higher the efficiency, the less subsurface export; so if I want my model to reflect only surface runoff, would I set this value to 1?

Second question: can anyone explain this to me better?

" Note 2: Load values may be expressed either as the amount of nutrient applied (e.g. fertil-

izer, livestock waste, atmospheric deposition); or as “extensive” measures of contaminants,

which are empirical values representing the contribution of a parcel to the nutrient budget

(e.g. nutrient export running off urban areas, crops, etc.) In the latter case, the load should

be corrected for the nutrient retention from downstream pixels of the same LULC. For ex-

ample, if the measured (or empirically derived) export value for forest is 3 kg.ha-1.yr-1 and

the retention efficiency is 0.8, users should enter 15(kg.ha-1.yr-1) in the n_load column of

the biophysical table; the model will calculate the nutrient running off the forest pixel as

15*(1-0.8) = 3 kg.ha-1.yr-1."

I am obtaining my export coefficients from empirical values, so should I be scaling them like this?




I have the exact same question and I couldn’t find any answer. Please let me know if you got help!



Hi both, sorry for the slow response!
On Question 1: for the model to reflect only surface runoff, you could set the value to 1, but you'd also need to make sure the critical_length parameter is set to a sufficiently short distance (because the model only reaches the set efficiency after a given distance). A cleaner solution is to set “proportion_subsurface_n” to 0 for all LULC types in the biophysical table. This way, all the runoff is transported on the surface.
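If the biophysical table lives in a CSV, that edit can be scripted. A minimal sketch (the “proportion_subsurface_n” column name is from the advice above; the other column names, values, and the file path are illustrative, so match them to your own table):

```python
import csv

# Example rows standing in for a real biophysical table; replace with
# rows read from your own CSV. Only proportion_subsurface_n matters here.
rows = [
    {"lucode": "1", "load_n": "10", "eff_n": "0.8",
     "crit_len_n": "150", "proportion_subsurface_n": "0.5"},
    {"lucode": "2", "load_n": "3", "eff_n": "0.6",
     "crit_len_n": "100", "proportion_subsurface_n": "0.2"},
]

# Route all runoff over the surface: zero the subsurface proportion
# for every LULC class.
for row in rows:
    row["proportion_subsurface_n"] = "0"

# Write the edited table back out (hypothetical path).
with open("biophysical_table.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```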

On Question 2: the note you quoted relates to setting the load values in the biophysical table, depending on the type of empirical data you have available (either inputs to the system, or values representing the edge-of-field export of nutrients).
For a given LULC patch, the model will apply the retention efficiency to the N load values. So if your “load_n” parameters already represent the edge-of-field value (running off the patch), the N export will be underestimated. Hence the correction (dividing the empirical export by 1 − eff) so the export from the patch reflects your empirical data. You can also check the paper referenced below for more details.

I hope this helps!

Redhead, J. W., May, L., Oliver, T. H., Hamel, P., Sharp, R., & Bullock, J. M. (2018). National scale evaluation of the InVEST nutrient retention model in the United Kingdom. Science of the Total Environment, 610–611. https://doi.org/10.1016/j.scitotenv.2017.08.092


Thank you, this definitely clears things up. I appreciate the help!


I am doing some trials with the NDR model.

If I have nutrient load data (N and P) from the literature and from local agricultural models (all in kg/ha), should I implement the correction?



Hi @Guille,

This thread is from last year; do you think you could make a new post on the forum with your question?


