Just upgraded the SQL cluster from 7.2.x to 7.3 and am STILL waiting for the only connection defined in the UI to refresh, while actually RDP'd to the primary node.

The server components were installed first, then I started the UI, which gave me the updates-ready notification, so I downloaded, installed and started the updated UI. I said YES to the Cache Compress, which appears to be compulsory.

I can see jobs in progress, but the green spinner just keeps on spinning...

We've had one or two reports of this in the latest version, and I'm not sure the underlying cause has been identified yet. In short, it seems the local data store runs into trouble after the cache compress.

You may find it springs into life after some time - how long have you left it?

If it's still not working after being left alone for ages, the quickest way to get things working again is to let it create a fresh file. It'll import history from SQL Server, but you'll lose the more detailed history specific to SQL Backup (such as compression rates).

To do this, stop the SQL Backup service, then locate the data.sdf file. On a cluster this is usually in a folder shared between the nodes; on a single server it's in C:\ProgramData\Red Gate\SQL Backup\Data\<instance name>. Rename or move the file elsewhere, then start the service again. A new file should be created, and things should come back to life.
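
For reference, here's roughly what that sequence looks like scripted (run as administrator). This is just a minimal Python sketch: the service display name "SQL Backup Agent" and the instance name are assumptions on my part, so check services.msc and your actual data path first, and on a cluster point it at the shared folder instead.

    import os
    import subprocess
    import time

    # Assumed service display name - confirm it in services.msc first.
    SERVICE = "SQL Backup Agent"
    # Assumed single-server path with a placeholder instance name; on a
    # cluster, point this at the shared folder between the nodes instead.
    DATA_DIR = r"C:\ProgramData\Red Gate\SQL Backup\Data\MYINSTANCE"

    # Stop the service so the data file is released.
    subprocess.run(["net", "stop", SERVICE], check=True)

    # Rename rather than delete, so you can roll back if needed.
    src = os.path.join(DATA_DIR, "data.sdf")
    os.rename(src, src + ".old-" + str(int(time.time())))

    # On restart, a fresh data.sdf is created and history is re-imported
    # from SQL Server.
    subprocess.run(["net", "start", SERVICE], check=True)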

We're currently investigating this; as soon as I hear more I'll post back. I'm not sure there's a way to skip the compress option (we're also adding a couple of extra fields to the data store, so even trying to trick it by keeping a copy and putting it back afterwards most likely won't work...)

PDinCA wrote: Can we say NO to the Cache Compress? I have FIVE more machines to upgrade and don't want to have to fudge around on all ten machines, so far...

Hi,

You can delete the server.dat and {number}.dat files from:
C:\Users\USER.NAME\AppData\Local\Red Gate\SQL Backup\Server Data (on the workstation, not the server) to bypass the cache compact step.
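
If you'd rather script that than click around on each of the ten machines, something along these lines should work (a Python sketch; close the UI first). The only assumption is that %LOCALAPPDATA% resolves to the path above for the logged-in user:

    import glob
    import os

    # Local UI cache folder; %LOCALAPPDATA% expands to
    # C:\Users\USER.NAME\AppData\Local for the current user.
    cache_dir = os.path.expandvars(r"%LOCALAPPDATA%\Red Gate\SQL Backup\Server Data")

    # server.dat and the {number}.dat files all match *.dat.
    for path in glob.glob(os.path.join(cache_dir, "*.dat")):
        print("deleting", path)
        os.remove(path)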

The path mentioned in a previous post is on the machine hosting SQL Server, and performance will improve dramatically if you purge it; however, I would clear the local cache files at the same time for good measure.

I recall you've had problems with the UI in the past - if you note the size of your .sdf and .dat files before purging them, I'll make sure our test files are at least that large, if they're not already.
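
A quick way to capture those sizes before purging anything (a Python sketch; both directory patterns are the assumed default paths from the earlier posts, so adjust to match your setup):

    import glob
    import os

    # Assumed default locations for the server data store and the local UI cache.
    patterns = [
        r"C:\ProgramData\Red Gate\SQL Backup\Data\*\data.sdf",
        os.path.expandvars(r"%LOCALAPPDATA%\Red Gate\SQL Backup\Server Data\*.dat"),
    ]

    for pattern in patterns:
        for path in glob.glob(pattern):
            print(path, os.path.getsize(path), "bytes")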

Is this in relation to the server where you already removed the data.sdf file? If so, I'm not sure what else to try; it may just be slow because it's rebuilding the cache file, but if it's sticking again there's something deeper going on and we'll need advice from the dev team on it (you're not the only person seeing this problem; a couple of other users have reported something similar).

Ah, okay - so if that file is quite large, it'll contain a lot of history, which will take some time to repopulate.

If it turns out that file does need removing, then I guess the issue is indeed that stopping the service may fail the cluster over, as the backup service is seen as a cluster resource... The installation notes here do seem to recommend a restart policy that restarts it on the *current* node rather than failing over, though, so you may be OK. The other thing to consider is whether you can temporarily remove or ignore the SQL Backup service from the clustering side of things. I'd test this out myself, but unfortunately I don't have a cluster here to try it on.
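
If you want to see what the cluster will actually do before touching anything, you can list the resource's properties and check its restart policy. A rough sketch, assuming the older cluster.exe tooling and a resource named "SQL Backup Agent" (both assumptions; run "cluster res" on its own to find the real name, and on newer systems the FailoverClusters PowerShell module replaces cluster.exe):

    import subprocess

    # Assumed resource name - "cluster res" with no arguments lists them all.
    RESOURCE = "SQL Backup Agent"

    # Look for RestartAction in the output: 1 means restart on the current
    # node only, 2 means restart and allow failover if restarts keep failing.
    subprocess.run(["cluster", "res", RESOURCE, "/prop"], check=True)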