Habitat Quality: Distance of threats in the habitat/cover

Dear InVEST team

My name is Diego, and I have two questions related to the distance of threats in the habitat:

1) According to the InVEST User Guide, the decay of a threat's impact over distance in the habitat can be either exponential or linear.

Based on that, my questions about the distance of a threat within a cover/ecosystem/habitat are formulated.

However, when consulting the supplementary material of 2 papers, I have found the following approaches.


I don’t know which of the two approaches may contain an error, but following the InVEST guide, the correct formulation would correspond to image 1, right?

2) Regarding the calculation of the distance of the threats in the cover or habitat (Euclidean distance):

I am working in QGIS and have found these options to do it (see images below), but I don’t know which one would be more accurate in this case.


Thank you very much for your help, this would clarify a lot of things in my research.

Best regards,
Diego

I think I have found how it works:

A) Regarding the distance of threats, the right statements are:
Which threat is distributed quickly to the surrounding ecosystem? (Linear)
Which threat affects the surrounding ecosystem over the long term? (Exponential)

B) The Proximity (Raster Distance) option is the right one, because it calculates distances from every cell in a raster to the nearest threat, which aligns well with how the model evaluates spatially continuous threats.

Hi @Diego_Guarin -

Sorry for the slow reply; everyone at NatCap is frantically preparing for our symposium next week.

I’m a little confused by what you are trying to do with the Distance Matrix or Proximity tools in QGIS. When using the Habitat Quality model, you only need to specify whether you want to use a Linear or Exponential decay function for each threat, along with the maximum distance over which the threat has an effect, and the model will do all of the calculations to apply whichever decay function you choose to the threat input layer. So you don’t need to do the GIS processing yourself.
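For reference, and only to illustrate what Linear vs. Exponential means (the model applies this for you), here is a minimal sketch of the decay functions as I read them in the User Guide; the 2.99 constant is my recollection of the documented value, so please double-check it against the Guide:

    import math

    def linear_decay(dist, max_dist):
        # Impact falls off in a straight line, reaching 0 at max_dist.
        return max(0.0, 1.0 - dist / max_dist)

    def exponential_decay(dist, max_dist):
        # Impact drops off quickly near the source; the constant is chosen
        # so the impact decays to roughly 5% of its initial value at max_dist.
        return math.exp(-(2.99 / max_dist) * dist) if dist <= max_dist else 0.0

    # Example: a threat with max_dist = 4 km, evaluated 1 km away
    print(linear_decay(1, 4))       # 0.75
    print(exponential_decay(1, 4))  # ~0.47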

~ Stacie

Dear Stacie,

Thanks for your answer.

Sorry if the question was perhaps a bit confusing.

First, I would like you to confirm whether the following statements are correct: if a threat is quickly distributed, it is linear, and if it affects over the long term, it is exponential.

Second, when reviewing the Habitat Quality (HQ) model in the Workbench, I noticed that the decay equation option (linear or exponential) is not there; that option appears in the Habitat Risk Assessment (HRA) tool instead. I am using version 3.13.0.

This is why I was asking whether, in the HQ model, it is necessary to process the distances using GIS. But if that information is already in the threats table CSV, then the HQ model will process that table automatically.

I hope I have been a little clearer about these questions.

Many thanks again, and good luck at the symposium.

Best,

Diego

Hi Diego -

You specify the decay equation for each threat in the Threats Table input. The User Guide describes and gives an example of this, and you can also see an example in the sample data we provide.
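Just as an illustration (made-up values; the exact required columns, including any raster path columns, depend on your InVEST version, so check the User Guide and the sample data), a threats table might contain rows like:

    THREAT,MAX_DIST,WEIGHT,DECAY
    mining,3.0,0.5,exponential
    deforestation,4.0,0.5,linear

where MAX_DIST is, if I remember right, given in kilometers.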

For linear versus exponential, I would describe it something like this: for a given maximum distance (max_dist in the Threats Table) that the threat is active over, if the threat's effect is mostly concentrated near the threat itself and tapers off quickly with distance, then I would use “exponential”. This might apply to something like a paved road, where people are mostly driving along it, and stay near it, and their effects mostly happen near the road (garbage, collisions with animals, etc). Some people and effects might go further (like noise and air pollution), but this is much less than the effects close to the road.

Then “Linear” indicates that the threat tapers off more gradually the further away you go from the threat. I might think of something like a hiking trail, where people often go off trail, usually staying close to the trail, but it’s easy for people to go further and further, leaving garbage, stomping vegetation, etc.
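To put rough numbers on it: if I'm reading the User Guide equations right, at half of max_dist a linear decay still leaves an impact factor of 0.5, while an exponential decay is already down to about exp(-1.5) ≈ 0.22, so the exponential option concentrates the effect much closer to the threat.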

Not sure this helps, and it’s a somewhat subjective parameter to select. You could try both for a given threat and see whether one or the other makes more sense to you once you see the results.

~ Stacie


Thanks so much, Stacie.

Now it is clear to me.

Based on that, I would like to show you a scheme for calculating “MAX_DIST” for the threats table.

I hope this approach to calculating MAX_DIST is on the right track.

Best,

Diego

Hi Diego -

Well, this is not something I’ve parameterized before, and honestly I’m not sure exactly what you’re trying to do. Usually, the distances come from some sort of literature search, or other research (or perhaps just intuition) that shows the distance effect of, say, cattle ranching on grassland. Maybe this includes a buffer around the ranching area where vegetation is somewhat degraded, or that species won’t use because they don’t like the clearing/cattle/disturbance.

What exactly do the values in “Composite Distance table” represent? It sounds to me like the actual distances you need to use for the model are “200m, 500m, or even 1km away”, not those same values added to whatever’s in the Composite Distance table.

~ Stacie

Hi Stacie,

The composite table contains the distances obtained from various sources (literature and experts) for each land cover in the area. What I did was average these distances based on the opinions of experts, resulting in the parameter for the maximum distance.

My question is, for example: if the maximum distance for the mining threat is 3 km, or for deforestation is 4 km, could we add a buffer of 1 km in the model to consider a worst-case scenario of maximum expansion? Would that be necessary?

Additionally, I have a concern regarding the calculation of the weight parameter. I have obtained it using the AHP methodology, as recommended and used in several related studies.

However, in my case, some people mentioned that completing the AHP matrix can be tedious, so they are reluctant to fill it out or even review it.

I assume there would be no restriction on using other, less “complex” but valid methods that provide similar results, correct?
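For context, here is roughly how I derived the AHP weights, with illustrative pairwise values only (the standard principal-eigenvector method; in practice I also check the consistency ratio):

    import numpy as np

    # Hypothetical pairwise comparison matrix for three threats
    # (e.g. mining, deforestation, illegal settlements) on Saaty's 1-9 scale.
    A = np.array([
        [1.0, 3.0, 2.0],
        [1/3, 1.0, 1/2],
        [1/2, 2.0, 1.0],
    ])

    # AHP weights are the normalized principal eigenvector of A.
    eigvals, eigvecs = np.linalg.eig(A)
    principal = eigvecs[:, np.argmax(eigvals.real)].real
    weights = principal / principal.sum()
    print(weights)  # roughly [0.54, 0.16, 0.30] for these made-up values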

Thanks so much again.

Best,
Diego

Hi Diego,

I work with Stacie at NatCap and I am the science lead for that model. I got a chance to read your thread, and here are my thoughts based on what you are trying to do.

Given the variability in the spatial impact of the stressors for each habitat, you could do a sensitivity analysis instead of averaging them. For instance, you could build a very conservative model where you use the lowest distances and a more extreme scenario where you use the highest distance values you gathered from the literature, then look at how the habitat quality maps vary spatially. It might help determine some areas where more investigation may be needed.
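Just to sketch the comparison (assuming you run the model twice, once with the lowest and once with the highest distances, and that the outputs are the usual GeoTIFF quality rasters; the paths and file names here are hypothetical, so adjust them to whatever your version actually writes out):

    import numpy as np
    from osgeo import gdal

    # Hypothetical workspaces from the conservative and extreme runs.
    low = gdal.Open("run_low_dist/output/quality_c.tif").ReadAsArray().astype(float)
    high = gdal.Open("run_high_dist/output/quality_c.tif").ReadAsArray().astype(float)

    # In practice, mask out the rasters' nodata value before comparing.
    diff = low - high  # positive where the larger distances lower habitat quality
    print("mean change:", np.nanmean(diff))
    print("max change:", np.nanmax(diff))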

I would also play with the sensitivity of the habitats to these threats, as some of them may be impacted over a smaller footprint but be very sensitive to these human activities. This would get at how the habitat responds ecologically to the threat, rather than the spatial dimension of where human activities impact habitats.

Ultimately, it depends on the question you are trying to answer, designing your modeling approach accordingly, identifying where the uncertainty lies… and doing some sensitivity testing.

I hope this helps.
jade

Hi Jade, thanks so much for your answer,

I think I was confusing myself a bit, so I would like to show you with an example whether I have grasped your ideas. I hope so.

Let’s take an example, shown in the following tables:

Here are my first 2 questions:

  1. For example, the value of 0.4 for illegal settlements has been averaged (obtained via the AHP methodology) from data provided by various experts or from the available literature.

  2. In my case, not all actors want to fill in an AHP matrix, so I was asking whether, for example, a direct rating scale, which can also be averaged and normalized, or another less complex method, could be used in this case.

In the following table:

Continuing with the example of the threat of illegal settlements:

  1. The WEIGHT value, 0.37, is the average of the complete data from the above table (the sensitivity table), right?

I ask because the supplementary material of some papers mentions that it was obtained in this way, but when reviewing some of those tables, the data do not match at all.

  2. Regarding the MAX_DIST data: as they also come from several sources, I have averaged them according to what is mentioned in the literature. But considering your explanation, I would like to know whether I could do the following:

Thank you very much for your help with this part, and if I could contact you to ask whether you would be willing to be a co-author of this study, that would be great.

Best regards

Diego

Hi Diego,

To answer your first question: yes, you can use a direct rating scale, which can also be averaged and normalized, or another less complex method if you have one on deck.
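For example, if the direct ratings for your three threats average out to 5, 3, and 2, normalizing by the total (10) gives weights of 0.5, 0.3, and 0.2, which can go straight into the WEIGHT column.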

Re: max distance, your proposed approach makes sense. Thank you for breaking down the steps clearly. I suggest you test both approaches to see whether and how the results differ. It is always good practice; that way you will know the impact of doing one vs. the other, and you can even explain that in your paper or to the reviewers if one comes back asking about a simpler (option 2) vs. a more complex (option 1) approach.

Thank you for inviting me as a co-author, but it won’t be necessary. I just answered a couple of questions that you had… so not a significant contribution.

Warmly
jade


Dear Jade,

Many thanks again; now it is definitely clearer to me.

Best regards,

Diego
