Ben M. wrote:
> Neil Aggarwal wrote:
>> Ben:
>>> Are "centralized" database (SQL) servers best left out of
>>> virtualization
>>> ...
>>> and intensive querying, Accounting Systems and so forth.
>> It really depends on the hardware you allocate to the VM
>> and how intensive the usage is.
>> Personally, if I have an intensively queried database
>> server, I want it directly on hardware.
> Ditto.
At my $work_place we tried to move our main database server into a Xen
VM, and while it kinda worked, we had to revert to the traditional
"hardware + OS + DB" setup, mostly because backup and maintenance
operations started to cause performance problems for regular OLTP-like
queries -- a situation that had not occurred when we were running the
non-virtualized setup.
The reason for virtualization was that it simplifies administration,
backups and disaster recovery of database servers.
(Setups:
"virtual": Dell server, local SCSI RAID, CentOS 5.x Dom0, CentOS 5.x
DomU, DomU disk as an LVM volume in Dom0, PostgreSQL 8.3; the
database VM is the only VM that Dom0 runs.
"normal": Dell server, local SCSI RAID, CentOS 5.x, PostgreSQL 8.3.)
> Neil
> What if it were the only "real" active VM? I know that might sound
> like a bit of a waste, but I am really enjoying the backup and
> duplication abilities of running in a Xen hypervisor, as well as its
> other features. It seems to be saving me a lot of time in production
> settings. And there is also a comfort level in uniformity on a LAN.
> Would there still be a significant hit on resource performance by the
> hypervisor if running that database server alone in it, or alongside a
> few rarely used, lightweight or spurious VMs? I am talking about the
> database activities running during the biz day, and backups, batches
> and other maintenance in the off hours. Nothing urgent here, just
> trying to plan out the future, mull over the possibilities and where
> to head.
There are no significant performance hits, except in one area: IO.
If your database VM has virtualized storage, there will be IO overhead.
It will be most noticeable in "mixed IO load" scenarios, such as "lots
of random IO requests + 1 or 2 sequential IO streams" -- a common
situation when an OLTP database generates random IO requests while a
backup process generates a stream of sequential read requests (or a
re-index process, a cluster operation, etc.).
Taken alone, neither random nor sequential IO presents a performance
problem, but the mixture of the two does. The reason is that Xen
effectively serializes all of a VM's IO in a single kernel thread in
Dom0. The situation, I think, can be avoided if a VM is given direct
access to a storage controller as a PCI device, but I have not tested
such a setup.
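For reference, handing a controller straight to the guest would look
roughly like this in a domU config file. This is only a sketch -- the
PCI address and file name are hypothetical, the device must first be
bound to pciback in dom0, and as said above I have not tested it:

```
# /etc/xen/db-vm.cfg (hypothetical domU config fragment)
# Give the guest the raw storage controller at PCI address
# 0000:03:00.0, so its IO bypasses the blkback path in dom0.
pci = [ '0000:03:00.0' ]
```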
Details: for each block device inside a VM, the Xen hypervisor creates
a kernel thread (named blkback.<device id>.xvd). All IO requests that
the VM issues to this device are passed via buffers through that single
kernel thread, which creates IO starvation and latency problems. (I
think only pointers are passed, not the data, but that does not change
the situation.) This old Linux Journal article does a good job of
explaining the nature of these problems in plain terms:
http://www.linuxjournal.com/article/6931
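You can spot those threads in dom0 with ps. Since live output needs a
real Xen host, the sketch below runs the same filter against canned
sample lines (the PIDs and device names are made up):

```shell
# Filter blkback kernel threads out of ps-style output.
# On a real dom0 you would run:  ps -eo pid,stat,comm | awk '$3 ~ /^blkback/'
printf '%s\n' \
  '101 D  blkback.3.xvda' \
  '202 S  sshd' \
  '303 D  blkback.3.xvdb' |
awk '$3 ~ /^blkback/ { print $1, $2, $3 }'
```

A STAT of "D" is uninterruptible sleep, i.e. the IO-wait state
described below.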
I've run a number of tests to confirm this theory: the two setups,
"normal" and "virtual" (see above), perform roughly the same ("normal"
being ~10% faster) on purely transactional or heavily sequential
workloads (transactional: pgbench tests; sequential: pg_dump or a
re-index process), but on the combined workload (pgbench + pg_dump in
parallel) the "normal" setup turns out to be ~50% faster. During the
tests, in Dom0 you can observe the "blkback" kernel thread sitting in
IO-wait, and in DomU atop shows the disk 100% busy.
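To get a feel for the mixed-load pattern without a live PostgreSQL (the
real test was pgbench running against the database while pg_dump
streamed in parallel), here is a tiny self-contained analogue: one
sequential stream plus a handful of scattered reads on a scratch file.
The sizes and offsets are arbitrary, chosen so it runs anywhere:

```shell
# Mixed-IO sketch in the spirit of the pgbench + pg_dump test:
# a sequential write stream followed by "random" 4 KiB reads.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=4096 count=256 2>/dev/null     # sequential stream
for off in 7 91 3 200 42; do                              # scattered reads
  dd if="$f" bs=4096 skip="$off" count=1 2>/dev/null >/dev/null
done
rm -f "$f"
echo "mixed-IO run complete"
```

On bare hardware the streams interleave at the elevator; through
blkback they funnel into one dom0 thread, which is where the latency
appears.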
Regards,
-Konstantin
--
Konstantin Antselovich
mailto: konstantin at antrselovich.com
> - Ben
> _______________________________________________
> CentOS-virt mailing list
> CentOS-virt at centos.org
> http://lists.centos.org/mailman/listinfo/centos-virt