Comments

Leave the disable options for now to help with testing but these will be removed
once we're confident in the thread implementations.
Disabled code bit rots. These have been in tree long enough that we need to
either commit to making them work or just remove them entirely.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

On Mon, Jan 24, 2011 at 04:28:48PM -0600, Anthony Liguori wrote:
> On 01/24/2011 03:00 PM, Anthony Liguori wrote:
> > Leave the disable options for now to help with testing but these will be removed
> > once we're confident in the thread implementations.
> >
> > Disabled code bit rots. These have been in tree long enough that we need to
> > either commit to making them work or just remove them entirely.
>
> I/O thread disables icount apparently.
>
> I'm not really sure why. Marcelo, do you know the reason
> qemu_calculate_timeout returns a fixed value in the I/O thread
> regardless of icount?
Hi,
The following commit hopefully fixed that issue.
commit 225d02cd1a34d5d87e8acefbf8e244a5d12f5f8c
Author: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Date:   Sun Jan 23 04:44:51 2011 +0100

    Avoid deadlock whith iothread and icount

    When using the iothread together with icount, make sure the
    qemu_icount counter makes forward progress when the vcpu is
    idle to avoid deadlocks.

    Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
See http://lists.gnu.org/archive/html/qemu-devel/2011-01/msg01602.html
for more info.
One more thing I didn't mention on the email-thread or on IRC is
that last time I checked, qemu with io-thread was performing
significantly slower than non io-thread builds. That was with
TCG emulation (not kvm). Somewhere between 5 - 10% slower, IIRC.
Also, although -icount & iothread no longer deadlock, icount
still sometimes performs incredibly slowly with the io-thread (compared
to non-io-thread qemu), in particular when not using -icount auto but
a fixed ticks-per-insn value. Sometimes it's so slow I thought it
had actually deadlocked, but no, it was crawling :) I haven't had time
to look at it any closer but I hope to soon.
These issues should be fixable though, so I'm not arguing against
enabling it by default. Just mentioning what I've seen FYI.
Cheers

On Tue, Jan 25, 2011 at 10:17:41AM +0100, Edgar E. Iglesias wrote:
> On Mon, Jan 24, 2011 at 04:28:48PM -0600, Anthony Liguori wrote:
> > On 01/24/2011 03:00 PM, Anthony Liguori wrote:
> > > Leave the disable options for now to help with testing but these will be removed
> > > once we're confident in the thread implementations.
> > >
> > > Disabled code bit rots. These have been in tree long enough that we need to
> > > either commit to making them work or just remove them entirely.
> >
> > I/O thread disables icount apparently.
> >
> > I'm not really sure why. Marcelo, do you know the reason
> > qemu_calculate_timeout returns a fixed value in the I/O thread
> > regardless of icount?
>
> Hi,
>
> The following commit hopefully fixed that issue.
>
> commit 225d02cd1a34d5d87e8acefbf8e244a5d12f5f8c
> Author: Edgar E. Iglesias <edgar.iglesias@gmail.com>
> Date:   Sun Jan 23 04:44:51 2011 +0100
>
>     Avoid deadlock whith iothread and icount
>
>     When using the iothread together with icount, make sure the
>     qemu_icount counter makes forward progress when the vcpu is
>     idle to avoid deadlocks.
>
>     Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
>
> See http://lists.gnu.org/archive/html/qemu-devel/2011-01/msg01602.html
> for more info.
>
> One more thing I didn't mention on the email-thread or on IRC is
> that last time I checked, qemu with io-thread was performing
> significantly slower than non io-thread builds. That was with
> TCG emulation (not kvm). Somewhere between 5 - 10% slower, IIRC.
>
> Also, although -icount & iothread no longer deadlocks, icount
> still sometimes performs incredibly slow with the io-thread (compared
> to non-io-thread qemu). In particular when not using -icount auto but
> a fixed ticks per insn values. Sometimes it's so slow I thought it
> actually deadlocked, but no it was crawling :) I haven't had time
> to look at it any closer but I hope to do soon.
>
> These issues should be fixable though, so I'm not arguing against
> enabling it per default. Just mentioning what I've seen FYI..
Right, I remember seeing 20% added overhead for a network copy with TCG on
the initial iothread merge. One can argue it's due to the added overhead
of locking/signalling between the vcpu and iothread, where neither can run
in parallel (while in kvm mode they can). But it's just handwaving until
detailed information is gathered on specific cases...