I’ve been using the Habitat Quality model and have come across a strange result. I’m running a scenario that considers both current and future land cover. The future land cover represents a potential development, in this case natural gas. The only change between the current and future land cover rasters is this development, which also represents a significant threat that all habitat-suitable land covers are sensitive to. Yet when I run this scenario, I end up with a summed future habitat quality score that is greater than that of the current, less-developed land cover. From what I can tell, this seems like it should be an impossible result. Any idea what might be going on here? I’ve saved my model setup as it was entered into the Workbench here.
What do you mean by “a summed future habitat quality score”? Are you aggregating the habitat quality scores within some area of interest? If so, I definitely don’t recommend doing that, since the Habitat Quality scores are relative rankings. Do you see a difference in the quality output maps between scenarios per pixel, for example if you subtract one from the other?
I have been summing the habitat quality scores within an area of interest, defined by a polygon. In terms of evaluating change in habitat quality, the result would be the same whether you took the sum of the per-pixel differences or the difference of the summed pixels over an area of interest, no?
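For what it’s worth, that equivalence follows from linearity of summation and is easy to sanity-check with NumPy (random arrays here as stand-ins for the two quality rasters):

```python
import numpy as np

rng = np.random.default_rng(0)
current = rng.random((100, 100))  # stand-in for current-scenario quality raster
future = rng.random((100, 100))   # stand-in for future-scenario quality raster

# Sum of per-pixel differences vs. difference of per-scenario sums.
sum_of_diff = np.sum(future - current)
diff_of_sums = np.sum(future) - np.sum(current)
assert np.isclose(sum_of_diff, diff_of_sums)

# Caveat: this only holds when both rasters have the same valid
# (non-nodata) pixels; unmatched nodata cells break the equivalence.
```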
In any case, I have subtracted the two and there are differences, which should be the case, as they represent two different LULC scenarios. What doesn’t make sense is that the future scenario, with more threat land cover, has pixels with greater habitat quality scores, including in the areas adjacent to the LULC change.
I’ve uploaded images that show the current and future LULC, where the bright pink cells represent the change in LULC. I also uploaded an image of the difference in habitat quality between the future and current scenarios (future minus current), where green cells represent negative values (i.e., lower future quality), and yellow, orange, and red cells represent positive values (i.e., greater future quality), and the black cells show the LULC change.
I’m not at all an expert on this model, and don’t know what constitutes habitat versus not for the species you’re modeling. But my initial thought on the spatial pattern is that there’s higher habitat quality around the new development areas because of the relative nature of the model. Now that there’s more threat around the center, and thus lower quality there, other areas must receive a higher quality when the threat rasters are normalized.
I’d be curious to know what the degradation maps look like, and if they make sense, since those maps are used to create the quality results, and they should show the collective threat more directly than the quality maps do.
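To illustrate the relative-scaling effect I mean, here’s a toy sketch of an InVEST-style quality curve, Q = 1 − Dᶻ/(Dᶻ + kᶻ), where the half-saturation constant k is tied to the degradation range (half the maximum value, per the common recommendation — an assumption about your setup). Raising degradation in one pixel can raise the *quality* of untouched pixels:

```python
import numpy as np

def quality(degradation, z=2.5):
    # InVEST-style quality curve: Q = 1 - D^z / (D^z + k^z).
    # Setting k to half the max degradation (a common convention,
    # assumed here) is what makes the scores relative.
    k = degradation.max() / 2.0
    return 1.0 - degradation**z / (degradation**z + k**z)

current_deg = np.array([0.2, 0.4, 0.6])
future_deg = np.array([0.2, 0.4, 0.9])  # new threat raises only the last pixel

q_current = quality(current_deg)
q_future = quality(future_deg)

# The first two pixels have identical degradation in both scenarios,
# yet their quality *increases* in the future scenario, because k
# scales with the (now larger) maximum degradation.
print(q_future[:2] > q_current[:2])  # -> [ True  True ]
```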
Thanks for the reply, and sorry for the late response. It turns out I forgot to update my threat rasters after updating my LULC, which meant that in some cases multiple threats were present in the same grid cell in the current scenario. This led to greater degradation and, ultimately, lower quality relative to the future scenario. After updating the threat rasters, my results look reasonable.
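In case it helps anyone else: a quick consistency check like this (array names are hypothetical stand-ins for binary threat rasters) can flag cells where multiple threats unexpectedly stack:

```python
import numpy as np

# Hypothetical binary threat rasters (1 = threat present) that should
# have been kept consistent with the updated LULC.
threat_a = np.array([[1, 0],
                     [1, 1]])
threat_b = np.array([[1, 0],
                     [0, 1]])

# Cells where more than one threat is present at once.
overlap = (threat_a + threat_b) > 1
n_overlap = int(overlap.sum())
print(n_overlap)  # -> 2
```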