Understanding `load_n` parameter in relation to retention efficiency and nutrient application rate

Hi folks,

tl;dr - if I’m using a retention efficiency (ret_n) of 0.4 for an LULC, and I know the N application rate is 10 kg/ha/yr (as per the example in the user guide), and if the value for load_n that goes in the biophysical parameters table is the load of that LULC pixel on the one downslope (as opposed to the loading onto that pixel itself), why is the logic not to say “if my LULC retains 0.4, then it can export 1 - 0.4 = 0.6, and so the load_n entry for that LULC should be 10 * 0.6 = 6”?

Firstly, I’d like to acknowledge this great thread on this topic. I’m glad I’m not the only one who’s gone round in a circle (or two) on this. I’m sure I must have read an old version of the guide initially, because I was sure I was doing the right thing by directly using applied N values for load_n instead of transforming them using the retention efficiency ret_n. Thanks to @swolny’s update, I see that the definitive right thing to do by the software is to multiply your N application rate by the retention efficiency. I just can’t get it straight in my head why it is this way round.

I read and reread this section:

Loads are the sources of nutrients associated with each pixel of the landscape. Consistent with the export coefficient literature (California Regional Water Quality Control Board Central Coast Region, 2013; Reckhow et al., 1980), load values for each LULC class are derived from empirical measures of nutrient export (e.g. nutrient export running off urban areas, crops, etc.). Alternately, if information is available on the amount of nutrient applied (e.g. fertilizer, livestock waste, atmospheric deposition), it is possible to use it by estimating the on-pixel nutrient use, and applying this correction factor to obtain the load parameters.

Distilling this down and paraphrasing, I get: “loads - (load_n) - are derived from empirical measures of nutrient export (from a pixel to downslope) and if actual application rate data is available, one can use it (actual application rate) by estimating on-pixel nutrient use (reflected in the retention efficiency - ret_n) and applying this correction factor (the retention efficiency) to obtain the load parameters”. Here I’ve inserted my understanding of what is being referenced by pronouns, etc., for clarity. Someone shout if they think anything is incorrect.

I find this confusing for a number of reasons:

  • The use of “associated with” feels vague - is it the source of nutrients to a pixel or from a pixel?
  • I interpret this to be saying load_n is not equivalent to nutrient exported from a pixel to downslope
  • I interpret this to be saying the two are linked via ret_n, the retention efficiency
  • It does not clarify at this point what that relationship is, although you may attempt to intuit it
  • If, indeed, this is saying that (N) “loads” are quantified via the load_n column in the biophysical parameters table, it suggests to me that the correct value in that table should be the applied N, and if your input is actually the N exported from a pixel then you should back calculate the applied N (load) using the retention efficiency when creating your table

Nomenclature aside, if I’m thinking of loadings in terms of the source of nutrients from each pixel to downslope, and retention efficiency as being the fraction of nutrient utilized by a pixel, and I know 10 kg N/ha is the application rate for a given LULC (to use the example in the user guide), and I’ve decided my LULC uses 40% of applied nutrient, why do I enter 4 (10 * 0.4) for the value of load_n for that LULC, which is supposedly the loading of that pixel on the one downslope, and not 6 (10 * (1 - 0.4))?

In other words, the above quoted passage still makes me feel that the value you should use in the biophysical parameters table is the actual N application rate (at best it’s not clear), and those people coming to the model with input values that are the N exported from a pixel to downslope should be the ones to convert their export values to loadings. On the other hand, if the parameter is indeed the loading of an LULC pixel onto the one downslope, and we define app_n as the N application rate and ret_n as the fraction utilized by a pixel, why is the value entered in the biophysical parameters table app_n * ret_n and not app_n * (1 - ret_n)?

@gtmaskall, thanks for taking the time to read all the discussions on this topic. I just read them myself, so I’ll offer my interpretation.

I believe that the “load_n” value in the biophysical table is meant to represent the starting load placed onto the landcover from an external source, not the load moving from one pixel to the next downslope pixel.

So if 10 kg/ha/yr is applied via fertilizer, etc, and 0.4 is retained, then the effective loading onto the landscape is 4 kg/ha/yr. And then the model takes it from there by simulating flow & retention down the slope.
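To make that arithmetic concrete, here’s a minimal sketch of the current User Guide advice (the function name `effective_load` is mine, just for illustration, not anything from the NDR source):

```python
def effective_load(application_rate, retention_efficiency):
    """Effective load (kg/ha/yr) placed onto the landscape, following the
    current User Guide advice: multiply the application rate by the
    fraction retained on the pixel where the nutrient is applied."""
    return application_rate * retention_efficiency

# 10 kg/ha/yr applied, 0.4 retained on-pixel -> load_n entry of 4.0
print(effective_load(10, 0.4))  # 4.0
```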


Thanks @dave,

That wording is helpful; I want to believe it. :slight_smile: What I mean is probably best illustrated if I write down my thinking.

So if 10 kg/ha/yr is applied via fertilizer, etc, and 0.4 is retained, then the effective loading onto the landscape is 4 kg/ha/yr. And then the model takes it from there by simulating flow & retention down the slope.

Okay, so here I struggle with the equivalence implied between “retention” and “loading”; an LULC with a higher retention efficiency puts a higher nutrient load on the landscape? If I dig out the relevant section from the user guide,

eff_[NUTRIENT] (ratio, required): Maximum nutrient retention efficiency. This is the maximum proportion of the nutrient that is retained on this LULC class.
The nutrient retention capacity for a given vegetation type is expressed as a proportion of the amount of nutrient from upslope. For example, high values (0.6 to 0.8) may be assigned to all natural vegetation types (such as forests, natural pastures, wetlands, or prairie), indicating that 60-80% of nutrient is retained.

Here, the implication is that high values are “good” and associated with good, healthy natural LULCs that we give a rousing good “hip hip hooray” to, and low values are associated with “naughty” LULCs that want to dump nutrient in your rivers thereby necessitating the whole “hey, maybe I could plant some riparian buffers of vegetation with high retention efficiency” modelling. Retaining is good, right? We like these LULCs because they hold onto nutrient and prevent it polluting your waterways. This interpretation of eff_N I get.

But if we have an N application of 10 kg/ha, and if we’re converting this to load_N for our biophysical table by multiplying it by our LULC eff_N, then we seem to be saying that our “good” and healthy natural LULCs (e.g. eff_N = 0.8) place a higher load (8 kg N) on our landscape than “worse” LULCs (e.g. eff_N = 0.4) that would be associated with a lower load (4 kg N) on our landscape.
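To spell that arithmetic out (with hypothetical eff_N values and a helper name of my own, purely for illustration):

```python
def load_n_current_advice(applied_n, eff_n):
    # load_n as per the current User Guide advice: applied N times eff_N
    return applied_n * eff_n

applied_n = 10  # kg N/ha applied
print(load_n_current_advice(applied_n, 0.8))  # 8.0 -> "good" natural LULC gets the higher load
print(load_n_current_advice(applied_n, 0.4))  # 4.0 -> "worse" LULC gets the lower load
```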

I think I have some time now to sit and have another read and deeper digestion of the user guide.

I agree the language may be the most confusing part. I should say that I am not an expert on nutrient retention; I have a lay understanding based on reading the User Guide and forum threads.

I am thinking of the biophysical table’s “load” value as a way for the user to “seed” the model with nutrient values across the landscape. As @swolny pointed out in another thread, some of that nutrient will be retained (used by the vegetation) on the pixel at which it was applied, and then will not be available to move downslope. This retention on the “first” pixel is not modeled by NDR, and so it’s up to the user to adjust application rates first.

But this leads to the question that you posed earlier: if retention efficiency is 0.4, then 40% of the applied nutrient would be retained/used on the first pixel and 60% would be available to move downslope. So should the recommended adjustment to “load” be this:

application_load * (1 - retention_efficiency) ?

instead of,

application_load * retention_efficiency
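Side by side, using the User Guide’s example numbers, the two candidate adjustments give quite different load_n values (a sketch only; the variable names are just for illustration):

```python
application_load = 10.0      # kg/ha/yr, the User Guide's example rate
retention_efficiency = 0.4   # fraction retained on the application pixel

# current advice: the fraction retained on-pixel becomes load_n
load_retained = application_load * retention_efficiency        # 4.0
# proposed alternative: the fraction free to move downslope becomes load_n
load_exported = application_load * (1 - retention_efficiency)  # 6.0
print(load_retained, load_exported)
```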


Yes @dave, I think you’re right, and thanks again for sanity-checking (!) It does make sense that the recommended adjustment should be

application_load * (1 - retention_efficiency)

I’ll update the User Guide again.

~ Stacie


Just a quick update on this. For an area I’ve been looking at, where I’ve created my own biophysical params table from some nominal N application rates, I scaled all retention efficiencies, eff_n, upwards by 10%. As I use the current advice of calculating load_n as N_applied * eff_n, this led to a biophysical table where all load_n values were increased. This still disturbs me, but a strong sign that “something is wrong” would be if this resulted in the total N export increasing. It did not: the total N export decreased in the example subcatchment I checked.

I’m still going to dive into the user guide again (I got interrupted) and still wonder if we shouldn’t be scaling thus:
application_load * (1 - retention_efficiency)
but at least inflating the retention efficiency, and thus load_n, did not produce an output where the N export had increased…