InVEST Urban Cooling – Model Validation

Hello,

I am writing to add to the discussion of model validation procedures and share results from a past project. In the InVEST Virtual Workshop 3: Urban InVEST (at minute 21:46), the presenters discuss the performance of the model using the Twin Cities and Paris, France as examples. They show a comparison of the model's output air temperature against satellite-derived land surface temperature (LST). The analysis produced r² values in the range of 0.5–0.86.

[Image: cooling_workshop_r2.PNG]

We repeated this analysis for the city of San Diego, CA and came up with a very different result. Our process used the following steps in ArcGIS Pro:

1. Generate 10,000 random points within the AOI.
2. Add surface information from both the air temperature output raster and the satellite-derived LST raster (LST was derived from the Landsat 8 Surface Reflectance Tier 1 product using Google Earth Engine (GEE)).
3. Generate a chart of modeled air temperature vs. LST and calculate the coefficient of determination.
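For anyone who wants to repeat this comparison without ArcGIS Pro, here is a minimal sketch of the same sampling idea in Python/numpy. It assumes the two rasters have already been loaded as aligned 2-D arrays on a common grid (e.g. via `rasterio`), with nodata as NaN; the function name and those conventions are my own, not part of InVEST:

```python
import numpy as np

def r_squared_from_samples(raster_a, raster_b, n_points=10000, seed=0):
    """Sample the same random pixels from two aligned rasters and return
    the r^2 (squared Pearson correlation, equivalent to the coefficient
    of determination of a simple linear fit) between the samples."""
    rng = np.random.default_rng(seed)
    rows = rng.integers(0, raster_a.shape[0], n_points)
    cols = rng.integers(0, raster_a.shape[1], n_points)
    a = raster_a[rows, cols]
    b = raster_b[rows, cols]
    # Drop pixels where either raster has nodata (assumed NaN here)
    valid = ~(np.isnan(a) | np.isnan(b))
    r = np.corrcoef(a[valid], b[valid])[0, 1]
    return r ** 2
```

With real data you would pass in the InVEST air temperature output and the GEE-derived LST, resampled to the same grid first, since the two products typically have different resolutions.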

[Image: modelair_temp_vsLST — scatter plot of modeled air temperature vs. LST]

While an r² of 0.12 is not robust, it was also no surprise given the differences between the variables being compared. First, surface temperatures vary much more than air temperatures, so the difference in ranges was expected. Second, the way InVEST calculates air temperature restricts its range to that of the user-defined Urban Heat Island Magnitude parameter.

T_air_nomix,i = T_air,ref + (1 − HM_i) · UHI_max
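A quick numerical illustration of why the output range is so compressed, using illustrative parameter values (a rural reference temperature of 34 °C and a UHI magnitude of 3 °C, chosen only to match the range we saw, not taken from any calibrated run):

```python
import numpy as np

# InVEST Urban Cooling air temperature (before spatial mixing):
#   T_air_nomix_i = T_air_ref + (1 - HM_i) * UHI_max
t_air_ref = 34.0   # rural reference temperature, deg C (illustrative)
uhi_max = 3.0      # UHI Magnitude parameter, deg C (illustrative)

# The heat mitigation index HM is bounded to [0, 1] by construction
hm = np.linspace(0.0, 1.0, 5)
t_air = t_air_ref + (1.0 - hm) * uhi_max

# Output is confined to [t_air_ref, t_air_ref + uhi_max], i.e. 34-37 deg C
# here, no matter how widely LST varies across the scene.
```

So with these parameters the modeled air temperature can never leave a 3 °C band, while the LST it is being compared against spans roughly 30 °C.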

This discrepancy is clearly visible in the graph, where LST ranges from ~24–55 °C while the InVEST air temperature output ranges from only ~34–37 °C. While the low correlation could be attributed to a number of issues with model fit and input inaccuracies, there still seems to be a fundamental mismatch between the two data sources being compared.

How did the InVEST model researchers achieve an r² of 0.86? What procedures did they use to examine the correlation between satellite data and modeled air temperatures? Are there any other suggested model validation techniques?


Hi @MerylK , thanks for describing your work in San Diego here! It’s always fascinating to see how well our models are performing in other study areas. While I can’t directly answer the question of how our Urban team achieved r^2=0.86 or what procedures they used, I suspect the details will be coming out in the peer-reviewed literature before too long. In the meantime, maybe @royremme @chris might be able to discuss some more specifics about this particular study?

James
