Whenever I delete events from the DFM database, the dbsrv10.exe process ramps right up and, more often than not, an alert is generated in DFM saying that the "Management station load is too high". DFM is then all but unresponsive for up to 10 minutes.

Has anyone else had performance issues with DFM / the server? Does the DFM database require some kind of cleanup or tuning? The current size of the monitordb.db file is 13.2GB. Any advice much appreciated. Cheers, Ian

Re: Poor performance - DFM database server

We are running an older version of DFM (4.0.2) and notice the same thing: performance is very bad when listing/deleting/selecting alerts, and when this is done the DFM host CPU goes to 100%. I was hoping our performance would improve when we upgrade to OnCommand 5, which I thought moved to a 64-bit architecture, but from your experience it still looks bad.

The DFM interface is very slow and inefficient to use like this. I believe you have to log a call to "prune" the DB, which just seems ridiculous (it should have this built in). Our monitordb.db is 9GB, but we are only monitoring 4 filers.

Did you guys find some kind of solution? We have tried to purge our DB with no difference in performance (it took about two hours for a 7GB DB, including purge/reload, if anyone wants to know). According to NetApp support, our dfmserver process is maxing out memory-wise. I think there is a 2GB cap per process, as DFM 4.0.2 is a 32-bit application.
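That 2GB figure lines up with the default Windows user-mode address space for a 32-bit process. As a quick back-of-the-envelope sketch (generic Python, nothing DFM-specific, just the address-space arithmetic):

```python
# A 32-bit process has a 4 GiB virtual address space; on Windows the
# default split leaves 2 GiB for user mode (up to 3 GiB only if the
# binary is linked /LARGEADDRESSAWARE and the /3GB boot option is set).
GIB = 1024 ** 3
total_address_space = 4 * GIB
default_user_space = total_address_space // 2

print(default_user_space)  # 2147483648 bytes, i.e. the ~2GB ceiling
```

So no amount of extra RAM in the VM gets a 32-bit dfmserver past that ceiling, which is why a move to a 64-bit build matters here.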

Re: Poor performance - DFM database server

The slowness is not due to any bug, but it could be related to multiple configuration issues. What version of OCUM/DFM are you running currently?

Is your server a VMware VM? If so, what is the memory and CPU configuration? Are those resources reserved or merely allocated? How many datasets are currently there? How many controllers are being monitored? Is Performance Advisor also enabled on the same server?

The answers to many of the above questions affect what you described. In general, 5.0.2P1 is a very stable release with no known memory, performance, or functional issues.

Regards

adai

Re: Poor performance - DFM database server

Thanks for getting back to me. I have been working on this for a while and have even opened a case a couple of times, but nothing has really helped. I just installed the 5.0.2P1 patch, which ran well for about 5 minutes, after which it slowed down again. I am constantly seeing at least one core of the server pegged and the dbsrv10.exe process using about 25% CPU. It seems to me the delay is being caused by the Sybase database. Prior to installing the 5.0.2P1 patch I also used the dfm purge utility.

What version of OCUM/DFM are you running currently?

5.0.2P1 on Windows 2008 R2 SP1 x64

Is your server a VMware VM?

Yes

If so, what is the memory and CPU configuration?

4 vCPU, 6GB RAM (played around with different configs to test but none have helped)

Are they reserved or allocated?

Yup, I have 5GHz reserved for CPU and 6GB for RAM

How many datasets are currently there?

113

How many controllers are being monitored?

8

Is Performance Advisor also enabled on the same server?

It was initially; I disabled it in hopes of improving performance, but that didn't help.

Re: Poor performance - DFM database server

Which version did you upgrade from? How long has it been since you upgraded?

After an upgrade to 5.0.x, DFM does the following:

1. Purge all data protection jobs older than 90 days; this runs every day at midnight. So soon after an upgrade, due to the number of jobs to be purged, you will encounter slowness for a week or so.

2. Prune all perf data files for stale instances; this starts every Sunday at midnight and runs once a week. Since you have perf data, which will definitely have stale entries due to mark-deleted objects, this will also consume resources on your DFM server.

Since all of this happens at midnight, I think these are the reasons why you see slowness. But 1 & 2 should stabilize in a week or two, and you should see a marked improvement in your DFM server.
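For illustration only, the 90-day cutoff in step 1 amounts to this kind of filter (the field names and in-memory list are hypothetical; the real purge runs as a scheduled job inside the Sybase database, not in Python):

```python
from datetime import datetime, timedelta

def jobs_to_purge(jobs, now, retention_days=90):
    """Return the jobs whose completion time is older than the retention window.

    Sketch of the nightly job-history cleanup: anything completed before
    `now - retention_days` is eligible for purging.
    """
    cutoff = now - timedelta(days=retention_days)
    return [job for job in jobs if job["completed"] < cutoff]

now = datetime(2013, 1, 1)
jobs = [
    {"id": 1, "completed": now - timedelta(days=120)},  # older than 90 days
    {"id": 2, "completed": now - timedelta(days=10)},   # still within retention
]
print([job["id"] for job in jobs_to_purge(jobs, now)])  # [1]
```

The point is that the first few midnight runs after an upgrade have a large backlog of eligible rows to delete, which is why the load spike tapers off once the backlog is cleared.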

I would also recommend upgrading your RAM to 16GB or so, since you are running 100+ datasets.