FileNotFoundError: [WinError 2]

Hi there!

I’m working with the HQ model using the latest version of InVEST. I’m almost positive all my input files are correct, but I’m getting this error: FileNotFoundError: [WinError 2].
Here is the log from my latest attempt:

InVEST-Habitat-Quality-log-2020-10-09–10_06_22 (1).txt (1.3 MB)

Thank you so much in advance!


Hi @Lara,

Thanks for posting. This is a pretty strange issue, because the call that is seemingly failing is trying to delete a file that has already been deleted. This looks like a possible race condition between two calls, and maybe even a documented Python bug.

I would normally ask you to run the model again to see if you get the same behavior, but I see that your data might be quite large, given how long that convolution operation took. What is the size of the data you are working with?



Hi @dcdenu4, thank you for your fast response.

Yes, I am actually working with quite a big raster this time: a 30 MB TIFF containing a LULC map that covers an area of 2,139 km², with a spatial resolution of 3 meters (pixel size is 9 m²). The total data size, including threat rasters and CSV files, is 153 MB.

I was able to run the model successfully earlier this year using a much smaller raster that covered a larger area but had a spatial resolution of 30 meters.

So I’ve been facing some new challenges with this new raster. I have been trying to run it on different computers, thinking RAM or disk space could be the limitation, but I don’t fully understand what this WinError is about. I am running the model again right now and will get back to you when it’s done.

Thanks for the advice!


Actually, when I check the raster size in ArcMap, the uncompressed size is 400 MB… and the threat rasters vary between 400 and 800 MB…

As @dcdenu4 mentioned, the WinError is an interesting issue and maybe even an underlying bug in the Python language’s standard library. In any case, there’s a good chance that it was just a fluke and that running the model once again will avoid the WinError. We’re looking into a longer-term fix in the meantime.

Based on your logfile, this looks like an extremely fine-scale resolution! The main thing that affects the HQ model’s runtime is the convolutions, one per threat, where the kernel size is based on the max_dist parameter in your threats table. For the ag_per threat, for example, the model needs to visit 3.27 million pixels (based on your max_dist) for every pixel in the threat raster. This number of pixels impacts both the disk space used and, of course, the computation time; if you don’t really need the extra resolution, increasing your cell size will dramatically reduce the model’s runtime.
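To put rough numbers on that scaling, here’s a back-of-envelope sketch (this is not InVEST’s actual code; the circular kernel shape and the 3 km max_dist are illustrative assumptions):

```python
import math

def kernel_pixels(max_dist_m, cell_size_m):
    """Approximate pixel count of a circular convolution kernel with
    radius max_dist at the given cell size (illustrative estimate only)."""
    radius_px = max_dist_m / cell_size_m
    return math.pi * radius_px ** 2

# With a hypothetical max_dist of 3 km, a 10x coarser cell size means
# ~100x fewer kernel pixels to visit per raster pixel.
for cell in (3.0, 10.0, 30.0):
    print(f"{cell:>4.0f} m cells: ~{kernel_pixels(3000, cell):,.0f} kernel pixels")
```

The quadratic dependence on cell size is why resampling is so much more effective than faster hardware here.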

Running on a high-powered computer with solid-state hard disk(s) can help, of course, but that only gets you linear speedups (as a function of processor speed and disk I/O), whereas increasing the cell size will have a much more pronounced effect on the algorithmic runtime.


Hi there @jdouglass! Thanks for your comments, really helpful. I’m really interested in running the model at this particular resolution, since the habitat I’m studying has a very restricted extent, but I guess I could try rescaling the raster to a 10-meter resolution and see if that helps. The other option would be to split the raster into several sections and run them independently, but I’m not sure whether that would improve the runtime.
I’m currently running it again and will wait for the results and go from there…

Thank you both for your advice. Will keep you posted…



Hi there! So I ran the model once more using the original 3-meter resolution raster, but I think there was a power cut in my lab at some point, because the process got interrupted and the log didn’t register any errors.

The next thing I did was resample all my files to 10 meters, and the convolution runtime did seem to improve, but I encountered several new errors. I’m attaching the log file here:

InVEST-Habitat-Quality-log-2020-10-20–11_37_12.txt (1.1 MB)

What do you guys think?


Hi @Lara,

It looks like a memory error during the convolution of the m_explot threat. I noticed that the Max_Dist for that threat is 47.57 km; is that right? With that large a distance, the kernel raster used for the convolution is 8.8 GB. We try to keep our software’s memory footprint as small as possible, but we have probably never run a convolution with a file of this size before.

We will certainly look into the memory error here and see if we can get a fix. In the meantime, a workaround would be to decrease that Max_Dist value by an order of magnitude and/or increase the pixel size (i.e., coarsen the resolution).
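As a rough illustration of why that kernel gets so large, here’s a hedged estimate assuming a square, float64 kernel raster held uncompressed; the exact 8.8 GB figure in the log will depend on InVEST’s actual data type and raster layout:

```python
import math

def kernel_raster_bytes(max_dist_m, cell_size_m, bytes_per_pixel=8):
    """Rough uncompressed size of a square kernel raster whose half-width
    is max_dist in pixels (illustrative assumptions: square, float64)."""
    radius_px = math.ceil(max_dist_m / cell_size_m)
    side = 2 * radius_px + 1
    return side ** 2 * bytes_per_pixel

# A ~47.57 km Max_Dist at 3 m cells puts the kernel in the multi-GB range.
print(f"{kernel_raster_bytes(47_570, 3.0) / 1e9:.1f} GB")
# Cutting Max_Dist by 10x shrinks the kernel by roughly 100x.
print(f"{kernel_raster_bytes(4_757, 3.0) / 1e9:.3f} GB")
```

This is why reducing Max_Dist or increasing the pixel size each have an outsized, quadratic effect on the kernel’s memory footprint.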

Sorry for these high-resolution, large-dataset issues!