
Thanks for the advice. With that command we are checking which queries take the longest to execute, is that right? On the other hand, we have a lot of queries logged as "too long for logging" or something like that. It's working fine right now, so I can't give you a log yet.
– nax, Jul 17 '12 at 15:29


I think you mean that the query is too large to be logged - usually caused by the query containing a huge $in list or similar. In that case you will either have to turn on profiling to get the details or work it out from the application side. Aaron is right: db.currentOp() is a good start for figuring out what is running at the time of slowness.
– Adam C, Jul 18 '12 at 0:05
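For reference, the two suggestions from the comments look roughly like this in the mongo shell (a sketch; the 100 ms threshold is an example value you would tune):

```javascript
// Show what is currently running; look at the secs_running, op,
// and query fields to spot long-running operations
db.currentOp()

// Enable the profiler at level 1: capture only operations slower
// than 100 ms into the db.system.profile collection
db.setProfilingLevel(1, 100)

// Afterwards, inspect the slowest captured operations
db.system.profile.find().sort({ millis: -1 }).limit(10).pretty()
```

Note that profiling adds some overhead of its own, so it is common to enable it at level 1 with a slow-ms threshold rather than level 2 (which captures everything).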

Yes, that was the output, but we finally used profiling to find the most expensive queries, and we are working on fixing them. Thanks!
– nax, Jul 18 '12 at 12:04

1 Answer

First, let me say that with the information given it will be hard to get at the root cause - this is usually an iterative process that takes multiple attempts to track down the culprit. In the interest of answering the "what next?" portion of your question rather than identifying the root cause, read on.

First, a couple of recommendations:

Get the host into MMS (it's free; see http://mms.10gen.com) so you can graph stats over time and get a view of the issues without having to sit on the box running commands

Get munin-node installed too, so you can correlate ops etc. with IO (install docs for MMS explain this).

Next, a couple of quick checks for common causes:

What is your filesystem/kernel? These generally need to be ext4/XFS, on a kernel recent enough to have a working fallocate (2.6.23 and 2.6.25 respectively), so that new file allocation is not slow
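Both of those can be checked quickly from the host (point the df command at your actual dbpath rather than the root filesystem used here as an example):

```shell
# Kernel version: a working fallocate needs >= 2.6.23 (ext4) / 2.6.25 (XFS)
uname -r

# Filesystem type backing the data directory (substitute your dbpath for /)
df -T /
```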

Assuming you don't get MMS and munin installed, capture iostat output to match up with mongostat and determine whether IO is the root cause of the bottleneck
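A typical way to do that is to run the two side by side at the same sampling interval and compare timestamps (iostat comes from the sysstat package and may need installing; the interval and sample count here are just examples):

```shell
# Extended per-device stats, every 2 seconds, 30 samples - watch %util and await
iostat -x 2 30

# In another terminal: MongoDB counters every 2 seconds - watch faults and locked %
mongostat 2
```

If spikes in iostat's %util line up with slow periods in mongostat, the disk is the likely bottleneck.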

Do you do any periodic batch updates that grow the documents significantly (i.e. that would cause moves)? Moves are expensive and can cause IO to get backed up
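One hedged way to check for this in the mongo shell of that era: collection stats reported a paddingFactor, and values climbing well above 1.0 indicated that documents were outgrowing their allocated space and being moved (the collection name below is a placeholder):

```javascript
// paddingFactor near 1.0: few moves; noticeably above 1.0: documents
// are being moved on update, which is expensive and IO-heavy
db.mycollection.stats().paddingFactor
```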

Is your disk up to the data volume you are writing to it? MongoDB fsyncs to disk every 60 seconds by default, and if the volume that needs to be synced after 60 seconds is massive (say, because of an insert spike) then you can also run into issues
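To gauge how expensive those periodic flushes are, serverStatus exposed flush timings in this era's storage engine (a mongo shell sketch):

```javascript
// backgroundFlushing reports flushes, total_ms, average_ms and last_ms;
// a rising average_ms suggests the disk cannot absorb each 60-second sync
db.serverStatus().backgroundFlushing
```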

That's not an exhaustive list - I have seen other issues cause this - but it should get you started down the right path.

Well, thanks! Maybe I wasn't clear about what I was looking for, but you gave me the points to start from. We found with top that reading/writing was causing a bottleneck, so we are going to start fixing that, then get the host into MMS and check which queries need to be optimized. In any case, we are reviewing all our queries in mongo and will create indexes based on that. Thanks a lot!
– nax, Jul 18 '12 at 11:55