Inject progress report in percentage into the block live stream. This
can be read out and displayed easily on restore.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
Changes in v2:
- Print banner only if there is really some block device to restore
block-migration.c | 30 ++++++++++++++++++++++--------
1 files changed, 22 insertions(+), 8 deletions(-)

On 1 Dec 2009, at 15:20, Jan Kiszka wrote:
> Inject progress report in percentage into the block live stream. This
> can be read out and displayed easily on restore.
I guess that this patch only reports percentage for the initial bulk copy of the image.
I haven't tested this scenario, but the next phase, sending dirty blocks, can be quite long too if the guest does a lot of I/O.
Won't it give a wrong impression to the user when qemu says "Completed 100%" but disk migration continues catching up for a while?

Pierre Riteau wrote:
> On 1 Dec 2009, at 15:20, Jan Kiszka wrote:
>
>> Inject progress report in percentage into the block live stream. This
>> can be read out and displayed easily on restore.
>
> I guess that this patch only reports percentage for the initial bulk copy of the image.
> I haven't tested this scenario, but the next phase, sending dirty blocks, can be quite long too if the guest does a lot of I/O.
> Won't it give a wrong impression to the user when qemu says "Completed 100%" but disk migration continues catching up for a while?
It does give a wrong impression (as there is also some wrong behavior) ATM.
But the plan is to update the number of pending blocks during the sync.
Theoretically, this progress value could even go backwards if (many)
more blocks become dirty than we are able to write out over a given
period.
Effectively, the total disk size increases during the migration due to
dirty blocks being added. Instead of carrying this updated number over
to the receiving side, I want to let the sender do the calculation and
only transfer the result inside the stream (as this is only about
visualization).
Jan
PS: I think I just found the e1000 migration issue, which turned out to
affect all NICs.