I have huge performance issues using MongoDB (I believe it is an mmapped DB) with ZFSonLinux.

Our MongoDB is almost write-only. On replicas without ZFS, the disk is completely busy in ~5 s spikes when the app writes into the DB every 30 s, with no disk activity in between, so I take that as the baseline behaviour to compare against.
On replicas with ZFS, the disk is completely busy all the time, and the replicas struggle to keep up to date with the MongoDB primary. I have lz4 compression enabled on all replicas and the space savings are great, so there should be much less data hitting the disk.

So on these ZFS servers, I first had the default recordsize=128k. Then I wiped the data and set recordsize=8k before resyncing the Mongo data. Then I wiped again and tried recordsize=1k. I also tried recordsize=8k with checksums disabled.
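For reference, the wipe-and-retune cycle above can be sketched roughly as follows; the pool and dataset names (`tank/mongo`) and the mount point are assumptions, not from my actual setup:

```shell
# Assumed names: pool "tank", dataset "tank/mongo", mounted at /var/lib/mongodb.
zfs destroy tank/mongo                    # drop the old data
zfs create -o recordsize=8k \
           -o compression=lz4 \
           tank/mongo                     # recreate with the new recordsize
zfs set mountpoint=/var/lib/mongodb tank/mongo
# then restart mongod and let the replica resync from the primary
```

Recordsize only affects newly written blocks, which is why each test required a full wipe and resync.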

Nevertheless, it did not solve anything; the disk was always kept 100% busy.
Only once, on one server with recordsize=8k, was the disk much less busy than on any non-ZFS replica. But after trying different settings and going back to recordsize=8k, the disk was at 100% again; I could not reproduce the previous good behaviour, nor see it on any other replica.

Moreover, there should be almost only writes, yet I see that on all replicas, under all the different settings, the disk is completely busy with 75% reads and only 25% writes.

(Note: I believe MongoDB is an mmapped DB. I was told to try MongoDB in AIO mode, but I did not find how to set it, and from another server running MySQL InnoDB I realised that ZFSonLinux did not support AIO anyway.)

What could be going on there? What should I look at to figure out what ZFS is doing or which setting is badly set?

EDIT1:
Hardware: these are rented servers with 8 vcores on a Xeon 1230 or 1240 and 16 or 32 GB RAM, with zfs_arc_max=2147483648, using HP hardware RAID1. So the ZFS zpool sits on /dev/sda2 and does not know there is an underlying RAID1. Even though this is a suboptimal setup for ZFS, I still do not understand why the disk is choking on reads while the DB does only writes. I understand the many reasons, which we do not need to rehash here, why this is bad for ZFS, and I will soon have a JBOD/no-RAID server on which I can run the same tests with ZFS's own RAID1 implementation on an sda2 partition, with /, /boot and swap doing software RAID1 with mdadm.
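The planned JBOD layout could look like the sketch below; the device names (sda/sdb) and pool name are assumptions for illustration:

```shell
# ZFS mirrors the data partitions itself; mdadm mirrors the system partitions.
# Device names (sda2/sdb2, sda1/sdb1) and pool name "tank" are assumed.
zpool create -o ashift=12 tank mirror /dev/sda2 /dev/sdb2
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1  # e.g. for /boot
```

With ZFS handling its own mirror, it can self-heal from checksum errors using the good copy, which a pool on top of hardware RAID1 cannot do.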

More information: what type of server is it (specific model)? Do you have a write cache on the RAID controller?
–
ewwhite Mar 21 '14 at 15:34

How big is the data set? Basically, Mongo recommends that the data set fit in RAM; otherwise use sharding across servers. It also recommends a "readahead size of 32 or the size of most documents" info.mongodb.com/rs/mongodb/images/…
–
LinuxDevOps Mar 21 '14 at 15:47

May I add, what are you gaining by using ZFS in this case? Is it just about compression? If so, maybe a zvol approach with a supported filesystem on top would make more sense.
–
ewwhite Mar 25 '14 at 14:31

Because XFS performs well and eliminates the application-specific issues I was facing with native ZFS. ZFS zvols allow me to thin-provision volumes, add compression, enable snapshots and make efficient use of the storage pool. More important for my app, the ARC caching of the zvol reduced the I/O load on the disks.

That is interesting and worth trying. Your zpool creation command line reminded me that my first tests, which showed little disk activity, were with ashift=12. My latest tests, with disastrous iowait, were without it, because I did not know what disks were behind the hardware RAID. Maybe that is a big part of the solution.
–
Alex F Mar 30 '14 at 21:30

@AlexF I use ashift=12 for most disk solutions (RAID controller or no), ashift=13 for PCIe SSD.
–
ewwhite Mar 30 '14 at 21:33

@AlexF: I agree that sector alignment might be another possible issue. Unaligned sector access is several times slower on 4k hard drives (almost any HDD sold today). If your ZFS partition is not aligned on a multiple-of-8 sector boundary, your ZFS volume may thrash disk accesses, because every write would then incur the unaligned-write penalty. Unfortunately, many disks report 512-byte sectors even though they use 4096-byte physical ones. I always use 4k alignment by default: very little space is sacrificed, and 512-byte drives won't notice the difference.
–
Fabio Scaccabarozzi Mar 31 '14 at 10:52
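A few quick checks for the 4k-sector issue discussed above; the device name (sda), partition number, and pool name (tank) are assumptions:

```shell
# Compare logical vs physical sector size (device name "sda" assumed):
cat /sys/block/sda/queue/logical_block_size    # often 512, even on 4k drives
cat /sys/block/sda/queue/physical_block_size   # 4096 on advanced-format disks
# Verify the partition itself is aligned:
parted /dev/sda align-check optimal 2
# Confirm the pool was created with 4k-aligned writes (pool "tank" assumed):
zdb -C tank | grep ashift                      # ashift: 12 means 2^12 = 4096-byte blocks
```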

That works rather well now; thank you all for your help.
–
Alex F Apr 7 '14 at 21:43

I managed to get no-RAID/JBOD on the rented server. The configuration is soft RAID1 with mdadm for every partition except one, which is of course ZFS's own RAID1 with ashift=12. Prefetching disabled, ARC max at 1/3 of RAM, lz4 compression, zfs_txg_timeout=5. The XFS zvol was created with volblocksize=128K and mounted with noatime,logbufs=8,logbsize=256k as well. I can fsyncLock() Mongo, xfs_freeze and zfs snapshot. I automatically send snapshot diffs hourly to the staging machine, which verifies their content by mounting them, so we are confident the snapshots are correct. I also clone the snap, mount+fsck, and send it to S3.
–
Alex F Apr 9 '14 at 18:47
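The setup above could be sketched as follows; the pool name (tank), zvol size, and snapshot names are assumptions, not the actual values:

```shell
# Assumed names: pool "tank", zvol "mongo-vol", mount at /var/lib/mongodb.
zfs create -V 200G -o volblocksize=128K -o compression=lz4 tank/mongo-vol
mkfs.xfs /dev/zvol/tank/mongo-vol
mount -o noatime,logbufs=8,logbsize=256k /dev/zvol/tank/mongo-vol /var/lib/mongodb

# Consistent backup: freeze Mongo and XFS, snapshot, then thaw in reverse order.
mongo --eval 'db.fsyncLock()'
xfs_freeze -f /var/lib/mongodb
zfs snapshot tank/mongo-vol@hourly
xfs_freeze -u /var/lib/mongodb
mongo --eval 'db.fsyncUnlock()'

# Ship the incremental diff to the staging machine:
zfs send -i tank/mongo-vol@previous tank/mongo-vol@hourly | \
  ssh staging zfs recv backup/mongo-vol
```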

First off, it's worth stating that ZFS is not a supported filesystem for MongoDB on Linux; the recommended filesystems are ext4 and XFS. Because ZFS is not even checked for on Linux (see SERVER-13223 for example), MongoDB will not use sparse files and will instead attempt to pre-allocate (fill with zeroes), which means horrendous performance on a COW filesystem. Until that is fixed, adding new data files will be a massive performance hit on ZFS (and with your write load you will be doing that frequently). While you are not allocating new files, performance should improve, but if you are adding data fast enough you may never recover between allocation hits.

Additionally, ZFS does not support Direct I/O, so you will be copying data multiple times into memory (mmap, ARC, etc.). I suspect that this is the source of your reads, but I would have to test to be sure. The last time I saw any testing of MongoDB on ZFS on Linux, the performance was poor even with the ARC on an SSD; ext4 and XFS were massively faster. ZFS might be viable for MongoDB production usage on Linux in the future, but it's not ready right now.
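One mitigation worth testing, assuming the MMAPv1-era mongod options are available in your MongoDB version (verify against your version's documentation), is to disable the zero-fill preallocation that hurts COW filesystems:

```shell
# mongod.conf fragment (MongoDB 2.x-era option names; an assumption to verify,
# not a confirmed fix from this thread)
noprealloc = true   # skip zero-filling new data files
smallfiles = true   # smaller data files, so each allocation hit is smaller
```

This trades away preallocation (which helps on ext4/XFS) for fewer large zero-fill writes, so it only makes sense to test on the ZFS replicas.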

I had been using XFS for Mongo for a year because it has xfs_freeze, which I could use to take LVM snapshots or consistent long mongodumps, but now there is too much data. That is why I began searching and found that ZFS was now stable on Linux and had fast snapshots and differential sends, which was heaven.
–
Alex F Mar 26 '14 at 21:36

@adam-c: What does "checked" mean in "ZFS is not even checked"? Do you simply mean that the Mongo server tests which filesystem its data files are on in order to choose an efficient storage mechanism, but that this has not been implemented for ZFS on Linux? (Sorry, I am not a native English speaker and I find the phrasing ambiguous.)
–
Alex F Mar 26 '14 at 21:47

The first thing I would look at is your zfs_arc_max setting. Here you are explicitly limiting the ARC to 2 GB, even though you have 16-32 GB. ZFS is extremely memory-hungry and zealous when it comes to the ARC. If you have non-ZFS replicas identical to the ZFS replicas (HW RAID1 underneath), doing some maths shows that you are probably invalidating the whole ARC cache within 5 seconds. The ARC is (to some degree) "intelligent" and will try to retain both the most recently written blocks and the most used ones, so your ZFS volume may well be trying to provide a decent data cache with the limited space it has. Try raising zfs_arc_max to half of your RAM (or even more) and using arc_shrink_shift to evict ARC cache data more aggressively.
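A minimal sketch of raising the ARC limit at runtime on ZFSonLinux; the 8 GB value is an illustrative choice for a 16 GB host, not a value from this thread:

```shell
# Raise zfs_arc_max to 8 GB at runtime (requires root):
echo $((8 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max
cat /sys/module/zfs/parameters/zfs_arc_max   # verify the new limit
# Note: raising the limit takes effect immediately, but a runtime change
# is lost on reboot unless also set as a module option.
```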

Here you can find a 17-part blog series on tuning and understanding ZFS filesystems.

Here you can find an explanation of the ARC shrink shift setting (first paragraph), which will allow you to reclaim more ARC RAM upon eviction and keep it under control.

I'm unsure of the reliability of the XFS-on-zvol solution. Even though ZFS is COW, XFS is not. Suppose XFS is updating its metadata and the machine loses power: ZFS will read back the last good copy of the data thanks to COW, but XFS won't know about that. Your XFS volume may end up "snapshotted" to the version before the power failure for one half and to the version after for the other half (because ZFS does not know that the whole 8 MB write has to be atomic and contains only inodes).

[EDIT] arc_shrink_shift and other parameters are available as module parameters for ZFSonLinux. Try setting them via a modprobe configuration file.
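A sketch of making the tunables persistent via module options; the specific values here are illustrative assumptions, not recommendations from this thread:

```shell
# Persist ZFS module parameters across reboots (values are examples):
cat >> /etc/modprobe.d/zfs.conf <<'EOF'
options zfs zfs_arc_max=8589934592
options zfs zfs_arc_shrink_shift=7
EOF
# Reload the zfs module (or reboot) for the options to take effect.
```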

I didn't realize his ARC was set so low... But power protection is the easiest thing to account for in server solutions. It's a nonstarter here. The OP's systems appear to be co-located.
–
ewwhite Mar 28 '14 at 12:55

Just keep in mind my warning about the ARC limit: it WILL use more memory than you give it.
–
Kwaio Mar 28 '14 at 13:40

I tend to limit ARC to 40% of physical RAM on linux hosts.
–
ewwhite Mar 28 '14 at 14:52

Unless you use deduplication or gigantic pools, ZFS is not especially memory-hungry. All file systems will use as much RAM as they can to cache data, and this is good: unused RAM is wasted RAM. The difference is that the ZFS cache is reported not as file cache but as used memory. ZFS will release that memory should there be demand for it, so this shouldn't be a problem unless demand spikes faster than ZFS can cope with.
–
jlliagre Mar 28 '14 at 17:20

I will try raising the ARC to 12 GB and see. MongoDB is also memory-hungry.
–
Alex F Mar 30 '14 at 21:23