I'm running BackupPC as a VM on Xen. I use both VMware and Xen for
virtualization and I can tell you truthfully that Xen VMs are quite a bit
faster than the VMware ones. I was a little wary of this setup so I've
really been hitting it hard, and the performance is better than expected
(200GB in 3 hours).

>>> <jmyers@...> 02/16/06 2:42 pm >>>
Guys,
I have recently been working on my BackupPC machine to try to get
backups running faster. I have discovered that during a backup the
BackupPC_dump process takes about as much of the processor as it can
get. As a result I chucked a faster processor into the machine (from a
1.7GHz P4 to a 3.2GHz P4) and noticed that in some cases the backups were
now cut down to almost half the time. So following on from that I am
considering getting a dual-core processor (and a new motherboard to suit)
to improve the speed of the backups even further. (I have already dropped
the compression down to 1; I think that is most likely the reason behind
it.)
Anyhow, with such a powerful processor I would hate to think of it just
sitting there doing nothing when I'm not backing up. I was considering
running the box up as a Windoze box and chucking VMware on it (the new
free Server product), giving it a few dedicated drive devices for the
backup pool, but keeping the base OS on a virtual disk.... This would make
the OS a bit more portable and "backup-able", so if I wanted to put it on a
different machine I could just copy the VMware image across. However I am
a little concerned that I will negate the entire benefit of the dual-core
processor just by putting it on a VM. I know I could use VMware for Linux
and do it that way, but the main benefit I see in doing this is that I can
easily back up the virtual machine, and once I have BackupPC (and Nagios on
this particular box) up and running I won't have to worry about
rebuilding them again (which I have had to do a few times due to some
hardware issues, swapping drives around, etc). I was just wondering
what other people's thoughts were about doing this?
The new VMware version supports a gigabit virtual NIC, where previously it
was a 10Mb NIC; that's what stopped me doing it earlier......

On Fri, 2006-02-10 at 04:27, Herman Bos wrote:
> One of our machines gets backups over the internet, but the uplink in
> question there is quite limited. Incremental backups are not a problem,
> but a full backup takes 4 days. :o
>
> I was wondering if it's possible to let backuppc make the full backups by
> combining the last full backup and an incremental backup? The data is
> there after all.
If you use rsync as the transport you'll only transfer the
changes anyway. Note that with rsync the changes since the
last full are transferred each time so if you want to save
bandwidth you'll want frequent full runs.
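For reference, a minimal sketch of the relevant settings; the config
path, host name, and periods here are illustrative, not canonical:

    # Append to the per-host config (location varies by install/version):
    cat >> /etc/backuppc/client1.pl <<'EOF'
    $Conf{XferMethod} = 'rsync';
    $Conf{FullPeriod} = 6.97;   # roughly weekly fulls; rsync sends only deltas
    $Conf{IncrPeriod} = 0.97;   # daily incrementals in between
    EOF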
--
Les Mikesell
les@...

On Wed, Feb 15, 2006 at 10:45:50AM -0800, Justin Best wrote:
> > I was wondering if it's possible to let backuppc make the full backups by
> > combining the last full backup and an incremental backup? The data is
> > there after all.
>
> As far as I'm aware, this is the default behavior. If you look at one of the
> incremental backups through the web interface, it will appear 'filled', so
> where data was skipped in an incremental backup, it will show up as if it
> were backed up anyhow. Because, as you said, the data IS there.
>
> However, there are a couple of caveats with incremental backups. Mainly,
> because the incremental determination is simply comparing 'modified' times
> on the files themselves, it's somewhat risky to assume that an 'incremental'
> backup will ALWAYS get you EVERYTHING you want. Thus, it's important to
> occasionally do a full backup as well.
>
> Of course, I'd recommend that you read the documentation for BackupPC on
> this topic yourself. It's possible that I'm misinterpreting and that the
> above information isn't quite correct.
You're misinterpreting the question. He wants backuppc to build
a new "full" from data it already has, what other backup software calls
a "synthetic" full backup. This would be the basis for the next set of
incrementals.
I too would like this feature, since some clients I would like to back up
are teleworkers with limited upstream bandwidth.
Whatever risk exists of losing something due to the incremental
algorithm would still be there. I believe that this risk is minimal
or nonexistent, at least with Unix OSes, but YMMV and I can't really
speak to the risks with Windows. Some filesystems or Unix variants
might support not updating file modification times in the filesystem,
but I've never run into one.
An option for what he's looking for would be to do a real rsync
of the full filesystem, rather than using rsync only as the transport
mechanism for backuppc. I don't know how well it could be integrated
with backuppc. Maybe do a restore to a staging disk, rsync the backup
client vs. the restore, then push the updated local copy into backuppc
as the next full.
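A rough, untested sketch of that staging-disk idea, assuming the host is
named client1 with a root share, run as the backuppc user:

    # 1. Restore the most recent backup of client1 to a staging area.
    mkdir -p /staging/client1
    BackupPC_tarCreate -h client1 -n -1 -s / . | tar -xpf - -C /staging/client1
    # 2. Rsync the live client against that copy; only deltas cross the link.
    rsync -aH --delete --exclude=/proc --exclude=/sys \
        client1:/ /staging/client1/
    # 3. Let backuppc take its next "full" from the local staging copy,
    #    e.g. by backing up /staging/client1 as a local share.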
danno
--
dan pritts - systems administrator - internet2
734/352-4953 office 734/834-7224 mobile

On Fri, 2006-02-17 at 11:40, Travis Wu wrote:
> I am using rsync xfer and wondering if I can just directly use the
> data without doing the restore first?
The backuppc archive copy is highly compressed so you
can't use it directly.
> The scenario is that I want to make a backup server for our production
> server. In case the production server goes down, hopefully the data
> is ready on the backup server so users will be able to use it right
> away.
>
> I am thinking of running rsync (just the command) against the production
> server first and then using backuppc to back up the local copy of the
> dataset. The only downside is that I'll need twice as much space as
> the original dataset.
Yes, that approach should work, with a few additional considerations
about how you will handle accessing the copy from the backup
machine (add the IP address, make the users aware of a different
server, or something else) and about matching logins and
authentication on the two machines. If the downtime while you
restored would be expensive, this could be worthwhile, and
continuing to let backuppc work lets you keep older versions
available if needed.
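As a purely illustrative version of that setup (host names and paths are
made up):

    # crontab entry on the backup machine: mirror production nightly,
    # then let backuppc back up the local mirror on its normal schedule.
    0 1 * * *  rsync -aH --delete prod.example.com:/export/data/ /mirror/data/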
--
Les Mikesell
les@...

Hi,
I am using rsync xfer and wondering if I can just directly use the data
without doing the restore first?
The scenario is that I want to make a backup server for our production
server. In case the production server goes down, hopefully the data is
ready on the backup server so users will be able to use it right away.
I am thinking of running rsync (just the command) against the production
server first and then using backuppc to back up the local copy of the
dataset. The only downside is that I'll need twice as much space as the
original dataset.
anyone?
Travis

Hi all,
I have two questions that I would be glad to have answered:
1. What is meant by "pool" exactly? Is this referring to all previous backups?
Is it referring to files that are common between computers?
2. I have seen on the list archives some emails about link errors, and it
seems that .../log/LOG shows lots of link errors. I have confirmed that my
cpool and host dirs are on two different file systems: cpool is on '/'
(/dev/hda2) and the host dirs are on a software raid setup (so /dev/md0). It
seems that since BackupPC was set up on our systems some 2/3 years ago, we
have been getting these link errors, but all machines were being backed up
fine. So my questions are: What are the implications of having the
link errors? Am I duplicating identical data and using up unnecessary disk
space? What is the purpose of cpool?
So much for two questions...
Any clarification on this whole issue will be greatly appreciated.
Khaled Hussain
Server Administrator
Coulomb Ltd
020 8114 1013

I've just been searching for backup methods and came across backuppc, which
was described as easy to set up.
Well, I've installed it on Debian and it's up and running; I've added a remote
machine to the hosts file, and it's just done a local machine backup.
But I'm getting access denied errors on the local machine backup.
I can't find where to change the backup location; I want it on a RAID5 drive
and not on my operating system drive.
It says it's putting them in /var/lib/backuppc/pc/localhost/0 but I want
them on an md drive.
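One common approach on Debian, sketched below with illustrative device
names, is to mount the RAID array at BackupPC's data directory rather
than reconfiguring $Conf{TopDir}:

    /etc/init.d/backuppc stop
    mkfs.ext3 /dev/md0                # only if the array is empty!
    mount /dev/md0 /mnt
    cp -a /var/lib/backuppc/. /mnt/   # preserve ownership and permissions
    umount /mnt
    echo '/dev/md0 /var/lib/backuppc ext3 defaults 0 2' >> /etc/fstab
    mount /var/lib/backuppc
    /etc/init.d/backuppc start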
On the remote machine, can I just select specific folders to back up, or is
it all or nothing?
Is there a 'simple get up and running' document anywhere?
many thanks
Ken

I was running about 3 firewire drives in a jbod config for a while, but
just started to have some problems with them on boot, where one was a
little dodgy and then linux wouldn't reboot, complaining that one drive
was missing..... But on the whole it worked fine for me.... And this would
most likely be a FireWire 800 device also.....

Hi,
On Friday 17 February 2006 07:53, Craig Barratt wrote:
> David Brown writes:
> > I've been using backuppc for several days, and I really like the concept
> > behind it. The web interface is very helpful. However, I'm having a
> > very hard time figuring out what to store the backup filesystem on.
> >
> > I've tried both XFS and ReiserFS, and both have utterly abysmal
> > performance in the backup tree. The problem has to do with the
> > hardlinked trees.
> >
> > - Most filesystems optimize directories by using inodes that are stored
> > near one another for files in the same directory. This allows access
> > to files in the same directory to be localized on the disk.
I've tried a lot of filesystems with backuppc and I've run across the same
things you have. I've stuck with reiserfs (version 3) because it was the
"least of all evils" (that's quite a literal translation from a Dutch
proverb; I hope you understand what I mean). Ext3 bogged down completely when
the number of files started to get larger (no dir_index), JFS was also slow
and had a memory leak when I tried it, and XFS worked OK until I had trouble
with my hardware raid and had to rebuild the filesystem using the xfs repair
tools, which just didn't work. No such experience yet with reiser, so I
stuck with that.
That being said: as you are still testing (if I understand your mail
correctly), could you do me a favor and run a test with ext3 with dir_index
and -T news?
Dir_index doesn't provide you with an advantage in the general case (if I
read the benchmarks published all over the internet correctly), but it may
work here. I sadly don't have enough spare hardware to build a serious
test machine. I would much rather use ext3 if I can than a "special"
filesystem, for all kinds of reasons.
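For anyone who wants to run that test, the commands would be roughly as
follows (the device name is an example; -T news lowers the bytes-per-inode
ratio, so you get enough inodes for a pool full of small files):

    # Fresh ext3 filesystem with hashed directory indexes:
    mke2fs -j -T news -O dir_index /dev/sdb1
    # Or enable the index on an existing ext3 filesystem, then rebuild
    # the existing directories so they are indexed too:
    tune2fs -O dir_index /dev/sdb1
    e2fsck -fD /dev/sdb1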
I would also like to see how reiser 4 performs, but as far as I know, that's
still in a state of flux (and still not added to the standard kernel source),
so I'm a bit reluctant to let it have control of my backups. But if someone
has experience with it on backuppc, please tell me about it :)
> > - BackupPC creates the files in the backup directory, and then
> > hardlinks them, by hash, into the pool. This means that each of the
> > entries in a pool directory has an inode (and data) on a diverse part of
> > the disk. Just statting the files in a pool directory is very slow. 'du'
> > of the pool directory takes several hours on any filesystem I've tried it
> > on.
I don't think ext3 with dir_index will be a "miracle fs", but I'm rather
curious how it behaves in this situation.
> > - Other than the first backup directory, the backup directories aren't
> > much better, since most of the files are hardlinks back to the pool.
>
> You're exactly right. A major performance limitation of BackupPC
> is that backup directories tend to have widely dispersed inodes.
> Yes, just stat()ing files in a single directory involves lots of
> disk seeks.
>
> A custom BackupPCd client is being developed, and once it is
> ready I'm curious to see if sorting readdir contents by inode
> number on the server will help the performance.
It worked wonders for the nightly runs. I used to run the version made by
someone who ordered the files by inode and that worked fine. I only stopped
using it because I had to tinker with it every time backuppc was upgraded and
the newer versions of backuppc don't have to process the whole (c)pool in one
go. So I let backuppc only do a small portion each night, which solves my
problem just the same.
Regards,
Guus Houtzager

On Thu, Feb 16, 2006 at 10:53:46PM -0800, Craig Barratt wrote:
> The biggest issue is maintaining accurate reference counts so you
> know when to delete unused pool files. The hardlink structure is
> using the file system to maintain reference counts.
>
> There has been some consideration of using an RDBMS to maintain
> the reference counts, but no benchmarking has been done. The
> table sizes will grow to be very large, and it's hard to imagine
> that any fewer disk seeks will be needed by the RDBMS since it
> certainly won't fit in memory.
>
> But the concept is worthy of further consideration.
You might look at Bacula, which stores similar information in an RDBMS. One
possible advantage is that the database can be stored on a different
drive than the backup pool.
I would guess that with proper indexes, performance could be made adequate.
Dave

David Brown writes:
> I've been using backuppc for several days, and I really like the concept
> behind it. The web interface is very helpful. However, I'm having a very
> hard time figuring out what to store the backup filesystem on.
>
> I've tried both XFS and ReiserFS, and both have utterly abysmal performance
> in the backup tree. The problem has to do with the hardlinked trees.
>
> - Most filesystems optimize directories by using inodes that are stored
> near one another for files in the same directory. This allows access
> to files in the same directory to be localized on the disk.
>
> - BackupPC creates the files in the backup directory, and then hardlinks
> them, by hash, into the pool. This means that each of the entries in
> a pool directory has an inode (and data) on a diverse part of the disk.
> Just statting the files in a pool directory is very slow. 'du' of the
> pool directory takes several hours on any filesystem I've tried it on.
>
> - Other than the first backup directory, the backup directories aren't
> much better, since most of the files are hardlinks back to the pool.
You're exactly right. A major performance limitation of BackupPC
is that backup directories tend to have widely dispersed inodes.
Yes, just stat()ing files in a single directory involves lots of
disk seeks.
A custom BackupPCd client is being developed, and once it is
ready I'm curious to see if sorting readdir contents by inode
number on the server will help the performance.
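You can get a rough feel for the effect from the shell (a sketch only, not
the planned BackupPCd behavior; it relies on GNU ls taking inode numbers
straight from the directory entries rather than stat()ing each file):

    cd /var/lib/backuppc/cpool/0/0/0   # any large pool directory
    # Name order: inodes are scattered, so stat() seeks all over the disk.
    time sh -c 'ls -1 | xargs stat > /dev/null'
    # Inode order: the same stat() calls, sorted to sweep the disk linearly.
    time sh -c 'ls -1iU | sort -n | awk "{print \$2}" | xargs stat > /dev/null'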
> So my question is twofold:
>
> - Is anyone aware of a Linux filesystem that can handle this kind of
> usage behavior without massive thrashing?
>
> - How difficult would it be to change the way that backups are done?
> Instead of hardlinking everything, keep the backup trees as a virtual
> concept. The result could be stored either in some kind of database,
> or even just a series of indexed flat files. If properly built and
> indexed, these should be searchable just as easily as the tree. In
> fact, the browser for restore can't look at the trees exclusively,
> anyway, because of incremental backups.
>
> The pool files would have to be created in the proper place initially
> (which, BTW, means that they can't be created first and then moved into
> place; the checksum has to be known before the file is even initially
> created).
>
> I guess I'll spend some time studying the code to see if this kind of
> concept is even plausible with the current code.
The biggest issue is maintaining accurate reference counts so you
know when to delete unused pool files. The hardlink structure is
using the file system to maintain reference counts.
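In other words, a pool file's link count is its reference count: each
backup tree holds one extra hardlink, and when the last backup referencing
a file expires, the pool entry is left with a single link. A hedged
illustration (the path is install-dependent):

    # Pool files with exactly one link are no longer referenced by any
    # backup; these are what the nightly cleanup can remove.
    find /var/lib/backuppc/cpool -type f -links 1 -print | head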
There has been some consideration of using an RDBMS to maintain
the reference counts, but no benchmarking has been done. The
table sizes will grow to be very large, and it's hard to imagine
that any fewer disk seeks will be needed by the RDBMS since it
certainly won't fit in memory.
But the concept is worthy of further consideration.
Craig

ROBERTO MORENO writes:
> I have been using Backuppc for a while and everything is great,
> but the last time I checked for my job the backup numbers were missing
> on the web front end. On the back end everything is still there.
> For some reason the old backups start at 11, 12, 13 and so on,
> while the new backups start at 1, 0.
>
> Does anybody know what's causing this? Also, this is only happening with 2 of
> my hosts.
It sounds like your pc/HOST/backups file got trashed, perhaps
because your disk was full. Check if the pc/HOST/backups.old
file has useful information (although this is unlikely since
it appears several backups have happened since the problem
occurred).
The CVS 3.x version has significant improvements in this area.
All such files (eg: backups, config.pl, restores) are written and
verified before renaming them, rather than renaming away the old
version and writing the new version as in 2.x. Also, a utility
is included that can reconstruct a trashed backups file. That
utility should work on 2.x backups (although it works better with
3.x backups, since extra metadata is saved to make reconstructing
the backups file more reliable). You could try it if you want,
although I caution you that I haven't tested it on 2.x backups,
and you will probably need to install the 3.x CVS version in a new
directory to use it. It's called BackupPC_fixupBackupSummary.
Craig

On Thu, 2006-02-16 at 22:08, Stephen Vaughan wrote:
> Does anyone know if it is possible to push a backup from a client TO
> backuppc? I've got several boxes running rsyncd and backuppc calls
> them to send data back and forth. I have a box that is firewalled and
> I want to be able to still back up this machine, but in the other
> direction. So the client machine makes a connection to backuppc and
> tells it what to do.. something along those lines.
There is nothing built in to work that way but you might
establish a vpn connection from the client to the server
with something like openvpn at backup time or do some
tricky port-forwarding over an ssh connection.
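A minimal sketch of the ssh variant; the port, names, and settings are
illustrative, and you should check your config.pl for the exact rsyncd
options your version supports:

    # On the firewalled client: open an outbound reverse tunnel so the
    # server can reach the client's rsyncd (port 873) via its own port 8730.
    ssh -N -R 8730:localhost:873 backuppc@backup.example.com
    # On the server, the host's config would then use rsyncd through the
    # tunnel, e.g. $Conf{XferMethod} = 'rsyncd'; with
    # $Conf{ClientNameAlias} = 'localhost'; and the rsyncd port set to 8730.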
--
Les Mikesell
les@...

On Thu, 2006-02-16 at 22:46, jmyers@... wrote:
> Les,
> After a bit of thought I worked out a slightly different way
> of doing it... I could use Linux as the base os, running vmware,
> running a Linux Virtual machine..... The whole reason is that on my
> Linux box, when installing it I have to set up linux (not that hard),
> configure the Gigabit adaptor (a bit touchy at times), install webmin,
> install nagios, install monarch (to configure Nagios), install
> backuppc, configure apache to authenticate for BackupPC, Nagios
> and Monarch, and set up a vnc server so I can vnc to it.... all up it
> can take about a whole day to rebuild the box. I also want to get a
> Lacie Biggest Disk (2TB) firewire drive for the pool, and having this
> as a vm would allow me to move it to a different machine REALLY REALLY
> easily. I could also copy the vm to the Lacie drive as a backup in case
> the whole machine went kaput!
I like the idea but I'm not sure I'd trust a firewire drive
as the main archive. I've been trying for a couple of
years to make a firewire drive work as a RAID1 mirror
along with a matching internal IDE and while it works well
enough to sync a copy when the machine isn't busy, if
I leave the mirror active during backups it will have
errors that kick it out of the raid or crash the machine.
--
Les Mikesell
les@...

Les,
After a bit of thought I worked out a slightly different way of
doing it... I could use Linux as the base os, running vmware, running a
Linux Virtual machine..... The whole reason is that on my Linux box, when
installing it I have to set up linux (not that hard), configure the
Gigabit adaptor (a bit touchy at times), install webmin, install nagios,
install monarch (to configure Nagios), install backuppc, configure apache
to authenticate for BackupPC, Nagios and Monarch, and set up a vnc server
so I can vnc to it.... all up it can take about a whole day to rebuild the
box. I also want to get a Lacie Biggest Disk (2TB) firewire drive for the
pool, and having this as a vm would allow me to move it to a different
machine REALLY REALLY easily. I could also copy the vm to the Lacie drive
as a backup in case the whole machine went kaput!
What do you think?
Jamie

Does anyone know if it is possible to push a backup from a client TO
backuppc? I've got several boxes running rsyncd and backuppc calls them to
send data back and forth. I have a box that is firewalled and I want to be
able to still back up this machine, but in the other direction. So the client
machine makes a connection to backuppc and tells it what to do.. something
along those lines.
--
Best Regards,
Stephen

On Thu, 2006-02-16 at 15:42, jmyers@... wrote:
> Anyhow, with such a powerful processor I would hate to think of it
> just sitting there when I'm not backing up doing nothing.
The obvious thing to do is to use Linux as your desktop
OS in the daytime when the backups are idle...
> I was considering running the box up as a Windoze box and chucking
> VMware on it (The new Free server product), giving it a few dedicated
> drive devices for the backup pool, but keeping the base os on a
> virtual disk.... This would make the OS a bit more portable and
> "Backup-able" so if I wanted to put it on a different machine I could
> just copy the vmware image across.
I'm not sure I see any advantage to having windows in the
picture, and installing Linux on a new box is fairly
trivial.
> However I am a little concerned that I will negate the entire benefit
> of the dual core processor just by putting it on a VM. I know I could
> use VMware for linux and do it that way, but the main benefit I see in
> doing this is that I can easily back up the virtual machine and once I
> have BackupPC (and Nagios on this particular box) up and running then
> I won't have to worry about rebuilding them again (which I have had to
> do a few times due to some hardware issues, and swapping drives around
> etc). I was just wondering what other people's thoughts were about
> doing this?
A really interesting concept would be to keep the pool on
a virtual drive. There is bound to be some overhead but
it might solve the problem of being able to copy the
archive quickly by allowing you to shut the vm down
and rsync files containing it to an offsite location.
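Sketched as commands; the tool invocations and paths are illustrative
(VMware Server's vmware-cmd is one way to script the stop/start):

    # Quiesce the VM so its disk files are consistent, copy them, restart.
    vmware-cmd /vms/backuppc/backuppc.vmx stop soft
    rsync -av --partial /vms/backuppc/ offsite.example.com:/vm-copies/backuppc/
    vmware-cmd /vms/backuppc/backuppc.vmx start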
--
Les Mikesell
lesmikesell@...