SQL Monitor is not collecting the Disk Avg. Read Time and Avg. Disk Queue Length statistics for one drive (F:) on a monitored server. The other logical disk counters are collected for that drive, and for the other drives on the machine all statistics are collected.
Any idea what is causing this, and any suggestions for a fix?

Brian,
Yes, we are monitoring a cluster with F: shared, but looking at the currently active node. Statistics for other counters, such as Disk Avg. Write Time, are collected by SQL Monitor for the F: drive, so the drive is accessible. Only the two counters, Avg. Read Time and Avg. Disk Queue Length, are not collected.

Did anyone ever figure this out? I'm having the same issue. Data is being collected for a clustered drive for everything except Read Time. SQL Monitor seems to be looking at the passive node, which causes access-denied errors there. How can I tell SQL Monitor that the drive is currently on the other node for that sensor?

Apparently this is a known bug (SRP-9314) in SQL Monitor. The response I got from Red Gate on my ticket, "Unable to get Disk avg. read/write time for certain drives", was that they are aware of the bug but cannot say when it will be looked at.

Removing the server, letting the data purge, and then re-adding the cluster did not work for me. This is frustrating. I'm going to open a ticket with Redgate about this issue (so they have another instance to add to their issue tracking), and then consider starting from scratch.

Funnily enough, the disk read/write time stats on the drives in my cluster started being populated again, though not all at once. They came trickling in: one day the Disk Avg. Read Time for one drive started being populated, a few days later the Disk Avg. Write Time for another drive, and so on until all the drive statistics were back.

No guarantee it will stay that way the next time we perform a cluster failover, though!

One thing I noticed when removing the cluster and re-adding it: before, SQL Monitor showed the cluster name with the nodes listed separately under it. We could click node1 and see its stats, then click node2 and see different stats for CPU, memory, and so on for each node. After re-adding it, there was simply the cluster name with no nodes listed under it. So it appears they changed the way clusters are handled in some version or another, and my guess is that the issue we're running into is tied to that change somehow. Only a guess, of course.

Glad to hear yours is sorting itself out without you having to lose any other data! I'll post back again to confirm whether starting fresh 'fixes' it for us. How could it not, though? New instance, new database... If it doesn't, I'll really start getting concerned.