Connection refused error

Hello,

I’m writing to report an issue I’ve encountered while frequently using the recreation and tourism model. Initially, I was working with large regions, but due to long processing times, I decided to divide these regions into smaller sub-areas.
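Dividing a large region into smaller sub-areas can be done by tiling the area of interest's bounding box. A minimal sketch (the coordinates and 2×2 split are hypothetical, not from this thread):

```python
def split_bbox(minx, miny, maxx, maxy, n_cols, n_rows):
    """Divide a bounding box into n_cols x n_rows sub-boxes,
    returned as (minx, miny, maxx, maxy) tuples."""
    dx = (maxx - minx) / n_cols
    dy = (maxy - miny) / n_rows
    boxes = []
    for r in range(n_rows):
        for c in range(n_cols):
            boxes.append((minx + c * dx, miny + r * dy,
                          minx + (c + 1) * dx, miny + (r + 1) * dy))
    return boxes

# Hypothetical 100 km x 100 km region split into 4 sub-areas:
tiles = split_bbox(0, 0, 100, 100, 2, 2)
```

Each tile can then be saved as its own AOI vector and run through the model separately.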

The problem is that one specific sub-area, despite being smaller, causes the model to freeze for hours during execution, getting stuck at the “zipping TUD” stage. I’ve tried several times, but the model only runs at a 20 km grid resolution for this area, whereas the other sub-areas run at 1 km without issue.
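The difference between a 20 km and a 1 km grid is large: halving the cell size quadruples the cell count, so the size of the request grows with the square of the resolution. A quick illustration (the 100 km × 100 km extent is hypothetical):

```python
import math

def grid_cell_count(width_km, height_km, cell_km):
    """Number of grid cells covering a width x height extent at the
    given cell size; partial edge cells are counted via ceiling."""
    cols = math.ceil(width_km / cell_km)
    rows = math.ceil(height_km / cell_km)
    return cols * rows

# Hypothetical 100 km x 100 km sub-area:
for cell in (20, 2, 1):
    print(f"{cell} km grid -> {grid_cell_count(100, 100, cell)} cells")
# 20 km grid -> 25 cells
# 2 km grid -> 2500 cells
# 1 km grid -> 10000 cells
```

So a 1 km run over the same sub-area asks the server to aggregate 400 times as many cells as a 20 km run, which may explain why only the coarse resolution completes.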

Today, I attempted to run it again to see if the issue had been resolved, but I received an error message and the model has not worked since.

Thank you very much in advance.

Best regards,
Alondra
InVEST-natcap.invest.recreation.recmodel_client-log-2025-06-12–12_15_42.txt (4.1 KB)

Hi @alcarilu , thank you for reporting this. This logfile indicated that the server was offline. I have restarted it.

The problem is that one specific sub-area, despite being smaller, causes the model to freeze for hours during execution, getting stuck at the “zipping TUD” stage.

This problem was not exhibited in the logfile you shared. As you suggest, the problem is likely related to the size of the requests being made, but I would like to investigate further. If you can share a log from a run where the model got stuck, that would be helpful for understanding the problem.

Thank you,

Thank you very much. I am attaching the file of my sub-area. Previously, I was able to obtain the visitation rate down to 1 km for this same area, but now it only works down to 20 km. If I try 2 km, it gets stuck on “zipping PUD” for hours.
InVEST-recreation-log-2025-06-11–20_49_57.txt (9.7 KB)

Thanks for sharing this log. This particular area does not seem very large at all, so I’m guessing it may have been a coincidence that it got stuck when it did. Other much larger requests happening simultaneously on the server may have bogged things down and caused the issue. We’re looking into this as well.
