I don’t know which of the two approaches contains the error, but following the InVEST guide, the correct formulation would correspond to image 1, right?
2. Regarding the calculation of the Euclidean distance of threats within the cover or habitat:
I am working in QGIS and I have found these options to do it (see images below) but I don’t know which one would be more accurate in this case.
A. Regarding the distance of threats, the right statement is:
Which threat spreads quickly to the surrounding ecosystem? (Linear)
Which threat spreads quickly to the surrounding ecosystem? (Exponential)
B) The option Proximity (Raster Distance) is the right one because
It calculates distances from every cell in a raster to the nearest threat, which aligns well with how the model evaluates spatially continuous threats.
Sorry for the slow reply, everyone at NatCap is frantically preparing for our symposium next week.
I’m a little confused by what you are trying to do using the Distance Matrix or Proximity tools in QGIS. When using the Habitat Quality model, you only need to specify whether you want to use a Linear or Exponential decay function for each threat, and specify the distance over which the threat happens, and the model will do all of the calculations to apply whichever decay function you choose to the threat input layer. So you don’t need to do the GIS processing yourself.
Sorry, if perhaps the question was a bit confusing.
First, I would like you to confirm whether the following statements are correct: if a threat spreads quickly, it is linear, and if it has long-term effects, it is exponential.
Second, when reviewing the (HQ) habitat quality workbench, I noticed that the decay equation option (linear or exponential) is not there. That option is available in the habitat risk assessment (HRA) tool. I am using version 3.13.0.
This is why I was asking whether it is necessary to process the distances with GIS for the HQ model. But if that information is already in the threats table CSV, then the HQ model will process it automatically.
I hope I have been a little clearer about these questions.
You specify the decay equation for each threat in the Threats Table input. The User Guide describes and gives an example of this, and you can also see an example in the sample data we provide.
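For illustration, a threats table might look something like the fragment below. The values and file paths here are hypothetical, and the exact column names vary between InVEST versions, so check the sample data and User Guide for your version (3.13.0):

```csv
THREAT,MAX_DIST,WEIGHT,DECAY,CUR_PATH
roads,1.0,0.5,exponential,threats/roads.tif
deforestation,4.0,0.7,linear,threats/deforestation.tif
```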
For linear versus exponential, I would describe it something like this: for a given maximum distance (max_dist in the Threats Table) that the threat is active over, if a threat is mostly present near the threat itself and tapers off quickly with distance, I would use “exponential”. This might apply to something like a paved road, where people mostly drive along it and stay near it, so their effects mostly happen near the road (garbage, collisions with animals, etc.). Some people and effects might go further (like noise and air pollution), but much less than the effects close to the road.
Then “Linear” indicates that the threat tapers off more gradually the further away you go from the threat. I might think of something like a hiking trail, where people often go off trail, usually staying close to the trail, but it’s easy for people to go further and further, leaving garbage, stomping vegetation, etc.
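To make the difference between the two shapes concrete, here is a minimal numerical sketch. It assumes the decay formulas as I recall them from the User Guide (linear: 1 − d/max_dist; exponential: e^(−2.99·d/max_dist), where the 2.99 factor makes the impact drop to about 0.05 at max_dist), so verify against the guide for your version:

```python
import math

def linear_decay(d, max_dist):
    """Impact falls off at a constant rate, reaching 0 at max_dist."""
    return max(0.0, 1.0 - d / max_dist)

def exponential_decay(d, max_dist):
    """Impact falls off quickly near the source; the 2.99 scaling
    brings the impact to roughly 0.05 at max_dist."""
    return math.exp(-(2.99 / max_dist) * d)

# Compare the two curves at a few distances (max_dist = 1.0 for simplicity)
for d in (0.0, 0.25, 0.5, 1.0):
    print(f"d={d:.2f}  linear={linear_decay(d, 1.0):.2f}  "
          f"exponential={exponential_decay(d, 1.0):.2f}")
```

Note how the exponential curve is already well below the linear one halfway to max_dist, which matches the intuition of effects concentrated near the threat.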
Not sure this helps, and it’s a somewhat subjective parameter to select. You could try both for a given threat and see if one or the other results make more sense to you once you see the result.
Well, this is not something I’ve parameterized before, and honestly I’m not sure exactly what you’re trying to do. Usually, the distances come from some sort of literature search, or other research (or perhaps just intuition) that shows the distance effect of, say, cattle ranching on grassland. Maybe this includes a buffer around the ranching area where vegetation is somewhat degraded, or species won’t use because they don’t like the clearing/cattle/disturbance.
What exactly do the values in “Composite Distance table” represent? It sounds to me like the actual distances you need to use for the model are “200m, 500m, or even 1km away”, not those same values added to whatever’s in the Composite Distance table.
The composite table contains the distances obtained from various sources (literature and experts) for each land cover in the area. What I did was average these distances based on the opinions of experts, resulting in the parameter for the maximum distance.
My question is, for example: if the maximum distance for the mining threat is 3 km, or for deforestation is 4 km, could we add a buffer of 1 km in the model, considering a worst-case scenario of maximum expansion? Would that be necessary?
Additionally, I have a concern regarding the calculation of the weight parameter. I have obtained it using the AHP methodology, as recommended and used in several related studies.
However, in my case, some people mentioned that completing the AHP matrix can be tedious, so they are reluctant to fill it out or even review it.
I assume there would be no restriction on using other less “complex” but valid methods that provide similar results, correct?
I work with Stacy at NatCap and I am the science lead for that model. I got a chance to read your threads, and here are my thoughts based on what you are trying to do.
Given the variability in the spatial impact of the stressors for each habitat, you could do a sensitivity analysis instead of averaging them. For instance, run a very conservative model where you use the lowest distances, and a more extreme scenario where you use the highest distance values you gathered from the literature. Then look at how the habitat quality maps vary spatially. It might help identify areas where more investigation is needed.
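The conservative-vs-extreme comparison above can be sketched numerically. This is only an illustration with hypothetical minimum and maximum literature distances and a linear decay, not part of the model itself:

```python
# Hypothetical literature range for one threat's max distance (km)
MAX_DIST_LOW, MAX_DIST_HIGH = 1.0, 4.0

def impact(d, max_dist):
    """Linear decay: 1 at the threat source, 0 beyond max_dist."""
    return max(0.0, 1.0 - d / max_dist)

# Compare the two scenarios at several distances from the threat;
# flag locations where the parameter choice changes the answer a lot
for d in (0.5, 1.0, 2.0, 3.0):
    lo, hi = impact(d, MAX_DIST_LOW), impact(d, MAX_DIST_HIGH)
    flag = "  <- scenarios disagree" if abs(hi - lo) > 0.3 else ""
    print(f"d={d:.1f} km  conservative={lo:.2f}  extreme={hi:.2f}{flag}")
```

Cells where the two scenarios disagree strongly are exactly the areas worth investigating further.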
I would also play with the sensitivity of the habitats to these threats, as some of them may be impacted over a smaller footprint but be very sensitive to these human activities. This would get at how the habitat responds ecologically to the threat, rather than the spatial dimension of where human activities impact habitats.
Ultimately, it depends on the question you are trying to answer, designing your modeling approach accordingly, and understanding where the uncertainty lies… hence doing some sensitivity testing.
For example, the value of 0.4 for illegal settlements is an average (obtained via the AHP methodology) of data provided by various experts and the available literature.
In my case, not all actors want to fill in an AHP matrix, so I was asking whether, for example, a direct rating scale, which can also be averaged and normalized, or another less complex method could be used in this case.
Continuing with the example of the threat of illegal settlements:
The WEIGHT value of 0.37 is the average of the complete data from the above table (the sensitivity table), right?
In the supplementary material of some papers it is mentioned that it was obtained this way, but when I review some of the tables, the data do not match at all.
Regarding the MAX_DIST data: since they also come from several sources, I averaged them according to what is mentioned in the literature. But considering your explanation, I would like to know if I could do the following:
To answer your first question: yes, you can use a direct rating scale, which can also be averaged and normalized, or another less complex method if you have one on deck.
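The average-then-normalize step for a direct rating scale can be sketched like this. The threat names and scores are hypothetical, and the normalization assumes the HQ convention that weights are on a 0–1 scale relative to the most damaging threat:

```python
# Hypothetical expert ratings (e.g., importance on a 1-10 scale) per threat
ratings = {
    "illegal_settlements": [8, 7, 9],
    "mining": [6, 5, 7],
    "deforestation": [9, 8, 10],
}

# Average each threat's ratings across experts
means = {t: sum(r) / len(r) for t, r in ratings.items()}

# Normalize so the most damaging threat gets a weight of 1.0
top = max(means.values())
weights = {t: round(m / top, 2) for t, m in means.items()}
print(weights)
```

This keeps the relative ordering of threats without requiring anyone to fill in a full pairwise AHP matrix.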
Re: max distance: your proposed approach makes sense. Thank you for breaking down the steps clearly. I suggest you test both approaches to see how the results differ, or not. It is always good practice: that way you will know the impact of doing one vs. the other, and you can even explain it in your paper or to reviewers if one comes back asking about the simpler (option 2) vs. the more complex (option 1) approach.
Thank you for inviting me as a co-author, but it won’t be necessary. I just answered a couple of questions that you had… not a significant contribution.