It did this again, same symptom.
Bacula fills up 83 volume files via vchanger, then when it /should/
run out of room and require another 'magazine' (usb disk), what it
does instead is prune/recycle the first volume it wrote, and the next
N volumes that it only filled to FULL a few hours ago.
Really confused about this, any other ideas? All the retention periods
on the volumes are 2 weeks.
Could it be recycling these volumes early because they wholly and only
contain Copy 'C' jobs?
On 27 September 2013 11:19, Gary Cowell <gary.cowell@...> wrote:
> They all say 2 weeks (in seconds) now, so I'll see what happens next
> time this runs.
>
> On 26 September 2013 16:49, John Drescher <drescherjm@...> wrote:
>>
>>
>>
>> On Thu, Sep 26, 2013 at 10:48 AM, Gary Cowell <gary.cowell@...> wrote:
>>>
>>> I don't know to be honest. What I do know is, I've just done that now,
>>> so we'll see what happens this weekend.
>>
>>
>> use the bconsole command list volume
>>
>> look at the volretention column of your volumes. This is in seconds so you
>> will most likely want to convert it to days.
>>
>> John

The sample schedule is missing the weekly and monthly pools. Each pool's
retention time is different: the daily pool expires after a week, the
weekly pool after 4 weeks, and the monthly pool after 12 months.
Sometimes it is necessary to restore a previous backup rather than the
latest, so this way I have a wide range of restore options.
But I'm not sure if it would be better to include the weekly and monthly
pool media in the daily pool, set the expiration time to 365 days, and
only back up to the daily pool. If you want to restore a 6-month-old
backup, how do you configure your schedule?
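For reference, that retention split maps directly onto per-pool directives in bacula-dir.conf. A minimal sketch; the pool names are illustrative, the periods are the ones described above, and all other directives are omitted:

```conf
Pool {
  Name = Daily
  Pool Type = Backup
  Volume Retention = 7 days
  AutoPrune = yes
  Recycle = yes
}
Pool {
  Name = Weekly
  Pool Type = Backup
  Volume Retention = 28 days
  AutoPrune = yes
  Recycle = yes
}
Pool {
  Name = Monthly
  Pool Type = Backup
  Volume Retention = 12 months
  AutoPrune = yes
  Recycle = yes
}
```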
On 03/08/13 07:06, Gary Dale wrote:
> On 02/08/13 05:56 PM, Süleyman Kuran wrote:
>> I export the Offsite media on Friday morning from the library and move
>> it to another location. Because there is no monthly backup in the
>> library I also schedule a job a day after the offsite backup in case I
>> need to recover from a monthly backup immediately. 1st sat is always 1
>> day after the 1st Friday, or am I wrong?
>>
>> You are right that I have two full backups on Sunday; that is a mistake.
>> I should remove the daily backup on Sundays.
>>
> The really strange thing is that you are running a full backup to the
> Daily pool at 3:17 each morning and an incremental each day except
> Sunday at 19:05 to the Weekly pool. Assuming normal business hours, that
> means you are doing an incremental followed by a full each day. Do you
> need two backups each day?
>
> The sample schedule looks pretty good and appears to do what you want:
>
> Schedule {
>   Name = "ScheduleTest"
>   Run = Full 1st sun at 23:05
>   Run = Differential 2nd-5th sun at 23:05
>   Run = Incremental mon-sat at 23:05
> }
>
> In fact, you could even simplify it to running a full every Sunday and
> incrementals in between.
>
> Schedule {
>   Name = "ScheduleTest2"
>   Run = Full sun at 23:05
>   Run = Incremental mon-sat at 23:05
> }
>
> If you put the incrementals into a different pool from the
> full/differential then you keep the size of the pool you want offsite
> manageable.
>
> If you are backing up the catalogue (a common practice) along with the
> files, then you don't need to run a full backup to an Offsite pool. I
> just copy the Weekly pool to a removable drive/media each week after the
> weekly Full or Differential backup finishes. This can be done as a
> RunAfterJob script or as a cron job. Should you need to restore the
> pool, just copy it back to your disk drive.
>
>
>
> _______________________________________________
> Bacula-users mailing list
> Bacula-users@...
> https://lists.sourceforge.net/lists/listinfo/bacula-users

>>>>> On Mon, 30 Sep 2013 00:07:00 -0700, bdelagree said:
>
> Hi everyone!
>
> The DataSpooling has not changed my backup. (See the end of this post)
> 1 day and 14 hours for 390 GB .... :(
>
> On the other hand, I just saw that on Friday I restarted only the StorageDaemon; should I also have restarted the Director and FileDaemon?
You need to restart the Director (or at least use the reload command) for
spooling changes to take effect (the log you posted was not using spooling).
__Martin

> The DataSpooling has not changed my backup. (See the end of this post)
> 1 day and 14 hours for 390 GB .... :(
>
> On the other hand, I just saw that on Friday I restarted only the
> StorageDaemon; should I also have restarted the Director and FileDaemon?
>
> Do you think that enabling compression could improve backup when there are
> many small files?
>
>
I would expect adding software compression to slow the backup down.
Does your source RAID array have a cache? Reading many small files causes a
lot of seek operations.
John

Hi,
There are already a few threads about this issue, but none of them gave a
satisfactory answer, so I am asking again more specifically. I am getting
the message below for one of my jobs, which has been in a waiting state
for a long time:
backup-sd JobId 20: Please mount Volume "A00045L4" or label a new one for:
A fresh, labeled volume in Append status is in the pool, and it is
already in the tape drive.
I tried to remount it using the unmount and mount commands, but the
system gives the error below when mounting, after loading the volume in
the drive:
3001 OK mount requested. Specified slot ignored. Device="Drive1"
Please tell me why this is happening: why is Bacula asking to mount
this volume or to add a new one while this volume is already
available, and how can I resolve this problem completely?
Thanks & Regards,

Hi All,
I am doing some restore tests and am able to run tape/disk based restores
successfully. I see that all the files show up in the right place, but is
there a way to show that the actual restore and the backup are the
same/consistent?
How do you show that your backup was successful? Is there something in
the output, or that could be included in the output, like a checksum or
something similar?
Thanks,

Sean Shergill
SysAdmin
Atlassian - SF
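One low-tech way to demonstrate this (alongside Bacula's own Verify job type) is to checksum both trees and compare the manifests. A minimal sketch; the paths in the usage comment are placeholders:

```shell
# verify_restore SRC RESTORED
# Checksums every file in both trees (paths made relative so the two
# manifests are comparable) and reports whether they match.
verify_restore() {
    src="$1"
    restored="$2"
    ( cd "$src" && find . -type f -print0 | sort -z | xargs -0 sha256sum ) > /tmp/src.sums
    ( cd "$restored" && find . -type f -print0 | sort -z | xargs -0 sha256sum ) > /tmp/restored.sums
    # Identical manifests mean every file came back bit-for-bit.
    if diff /tmp/src.sums /tmp/restored.sums >/dev/null; then
        echo "restore verified"
    else
        echo "restore differs"
    fi
}

# Hypothetical usage, e.g. after restoring under a "Where" prefix:
# verify_restore /data/projects /tmp/bacula-restores/data/projects
```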

Quoting Radosław Korzeniewski <radoslaw@...>:
> Hello,
>
> 2013/9/24 Deepak <deepak@...>
>
>> Hi,
>>
>> While setting up recycled volumes I was facing some issues, which I have
>> resolved with the help of you guys.
>>
>> I have three volumes. I have written to the 1st and 2nd volumes; these are
>> marked with Full status. Bacula just took a backup of 200 GB on the 3rd
>> volume. After that backup I executed a restore job and restored a few files
>> successfully. After this restoration I scheduled another job, which should
>> write on the 3rd volume as per my understanding, and it did. But
>> while locating the end of data it reported an error, marked the volume
>> status as Error, and recycled the 1st volume.
>>
>> The recycling part works perfectly, but locating the end of data
>> does not. I have hit this issue a second time in the same scenario.
>>
>> The errors I received in the log files are:
>>
>> 24-Sep 12:26 backup-dir JobId 30: Start Backup JobId 30,
>> Job=CONFIG.2013-09-24_12.26.00_54
>> 24-Sep 12:26 backup-dir JobId 30: Using Device "Tape-0"
>> 24-Sep 12:27 backup-sd JobId 30: Volume "A00041L4" previously written,
>> moving to end
>> of data.
>> 24-Sep 12:46 backup-sd JobId 30: Error: Unable to position to end of data
>> on device
>> "Tape-0" (/dev/lin_tape/IBMtape0): ERR=dev.c:1208 read error on "Tape-0"
>> (/dev/lin_tape/IBMtape0). ERR=Input/output error.
>>
>> 24-Sep 12:46 backup-sd JobId 30: Marking Volume "A00041L4" in Error in
>> Catalog.
>> 24-Sep 12:46 backup-sd JobId 30: 3307 Issuing autochanger "unload slot 2,
>> drive 0"
>> command.
>> 24-Sep 12:48 backup-dir JobId 30: There are no more Jobs associated with
>> Volume
>> "A00042L4". Marking it purged.
>> 24-Sep 12:48 backup-dir JobId 30: All records pruned from Volume
>> "A00042L4"; marking
>> it "Purged"
>> 24-Sep 12:48 backup-dir JobId 30: Recycled volume "A00042L4"
>> 24-Sep 12:48 backup-sd JobId 30: 3304 Issuing autochanger "load slot 3,
>> drive 0"
>> command.
>> 24-Sep 12:48 backup-sd JobId 30: 3305 Autochanger "load slot 3, drive 0",
>> status is OK.
>> 24-Sep 12:48 backup-sd JobId 30: Recycled volume "A00042L4" on device
>> "Tape-0"
>>
>>
> The cause of this problem is the lin_tape driver. I had the same issue on a
> number of implementations. Did you check your tape drive setup with the
> btape utility before the first backup? What was the result of the test?
>
>
>>
>> It has recycled volume A00042L4 but marked A00041L4 as Error. For more
>> information, I am using the lin_tape driver for tape drive communication
>> with the IBM tape.
>>
>> Please tell me how to resolve this issue.
>>
>>
> Change 'lin_tape' to the 'st' driver. It will solve all your issues.
>
Hi,
I have removed lin_tape and configured the system with 'st'. I think it is
working fine except for one thing that was working with lin_tape: my
HBA card failover, configured under /etc/modprobe.d/lin_tape.conf
with the parameters below.
options lin_tape alternate_pathing=1
options lin_tape tape_reserve_type=persistent
options lin_tape lin_tape_debug=1
After disabling lin_tape this does not work with 'st' by default. Can you
give me some idea how to configure HBA multipathing failover on
RHEL 6.4 with the 'st' driver?
> best regards
> --
> Radosław Korzeniewski
> radoslaw@...
>

On Fri, 27 Sep 2013 16:30:20 +0530
Deepak <deepak@...> wrote:
[...]
> how is it possible to write 850 GB of data on a tape with 800 GB
> capacity?
LTO provides hardware compression, which depends on the
compressibility of the data being written to the cartridge (as an
anecdote from my $dayjob, where I'm putting .wav files on LTO-4
cartridges, a single cartridge usually holds up to 1 TB of data instead
of the bare 800 GB).
That's why it's usually advised to turn software compression off in
Bacula and rely on the compression done by the LTO drive.
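For what it's worth, the 850 GB figure quoted above only needs a modest average hardware-compression ratio on an 800 GB native cartridge:

```shell
# Minimum average compression ratio to fit 850 GB on 800 GB of native capacity.
awk 'BEGIN { printf "%.4f:1\n", 850 / 800 }'   # prints "1.0625:1"
```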

Hi,
I have been using Bacula for some time to back up to my IBM LTO-4 tape
library with 4 tape drives.
I want to configure a verify job to verify my backups, and a cleaning
job for tape drive cleaning. Please tell me:
1) How can I configure a verify job for backup verification after
running backup jobs?
2) How can I configure cleaning jobs to automatically clean the tape
drives using Bacula?
3) How often should I run this cleaning job to clean tape drives in a
clean environment?
Thanks,
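On (1), a minimal sketch of a Verify job definition. The resource names here (client1-fd, "Full Set", LTO-Library, Default, and the "BackupClient1" job name) are placeholders for whatever the existing backup job uses; Level = VolumeToCatalog re-reads what was just written to the volume and compares it against the catalog records:

```conf
Job {
  Name = "VerifyLastBackup"
  Type = Verify
  Level = VolumeToCatalog
  # Verifies the most recent run of the named job (hypothetical name):
  Verify Job = "BackupClient1"
  Client = client1-fd
  FileSet = "Full Set"
  Storage = LTO-Library
  Pool = Default
  Messages = Standard
}
```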

They all say 2 weeks (in seconds) now, so I'll see what happens next
time this runs.
On 26 September 2013 16:49, John Drescher <drescherjm@...> wrote:
>
>
>
> On Thu, Sep 26, 2013 at 10:48 AM, Gary Cowell <gary.cowell@...> wrote:
>>
>> I don't know to be honest. What I do know is, I've just done that now,
>> so we'll see what happens this weekend.
>
>
> use the bconsole command list volume
>
> look at the volretention column of your volumes. This is in seconds so you
> will most likely want to convert it to days.
>
> John

Just for your information, here are the modifications:
For the NFS server I created two jobs, one for the system and another one
for the directory that contains the millions of files.
I created the directories /var/lib/bacula/spool/drive0 and
/var/lib/bacula/spool/drive1, and then did
chown -R bacula:bacula /var/lib/bacula/spool
In the Device sections of bacula-sd.conf I added these options:
Device {
  Name = Drive-0
  ...
  Maximum Spool Size = 24gb
  Maximum Job Spool Size = 3gb
  Spool Directory = /var/lib/bacula/spool/drive0
}
Device {
  Name = Drive-1
  ...
  Maximum Spool Size = 24gb
  Maximum Job Spool Size = 3gb
  Spool Directory = /var/lib/bacula/spool/drive1
}
+----------------------------------------------------------------------
|This was sent by supervision@... via Backup Central.
|Forward SPAM to abuse@...
+----------------------------------------------------------------------

Hi everyone!
Sorry for my short absence, but I've been busy with another little problem:
I had to create a virtual machine under OS 9 for one of my users.
I had forgotten how basic the old system was!
:p
Finally, tonight is my monthly Full Backup.
I want to change my jobs and set up DataSpooling.
I will give you the result on Monday.
Thank you for your help.

On Thu, Sep 26, 2013 at 10:48 AM, Gary Cowell <gary.cowell@...> wrote:
> I don't know to be honest. What I do know is, I've just done that now,
> so we'll see what happens this weekend.
use the bconsole command list volume
look at the volretention column of your volumes. This is in seconds so you
will most likely want to convert it to days.
John
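As a quick check on that conversion (1,209,600 seconds is the 2-week retention discussed in this thread), a shell one-liner:

```shell
# Convert a VolRetention value from "list volume" (seconds) into days.
awk 'BEGIN { printf "%.1f days\n", 1209600 / 86400 }'   # prints "14.0 days"
```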

Hi,
Sorry for the delay, but while trying to reproduce the problem I am
running into other problems. As soon as I can get back to the original
error, I'll let you know.
Regards,
On 25/09/13 03:21, Radosław Korzeniewski wrote:
> Hello,
>
> 2013/9/24 Juan Pablo Lorier <jplorier@...>:
>> Hi,
>>
>> I'm using bacula 5.0.3 on CentOS 4 and I recently had to add a couple of
>> new filesets. Although I reloaded the director config, they don't get into
>> the database, and thus if I run a job using those filesets I get an error.
> What error did you get? Can I assume you've got a "Syntax error"?
>
>> Can anyone give me some help with this?
> Sure.
>
> best regards

I don't know to be honest. What I do know is, I've just done that now,
so we'll see what happens this weekend.
On 26 September 2013 13:40, John Drescher <drescherjm@...> wrote:
>
>
> ---------- Forwarded message ----------
> From: John Drescher <drescherjm@...>
> Date: Thu, Sep 26, 2013 at 8:39 AM
> Subject: Re: [Bacula-users] Copy Job recycling volumes too quickly
> To: Gary Cowell <gary.cowell@...>
>
>
>
>> I have a copy job which copies my full backups to a USB drive using
>> vchanger once a week.
>>
>> Recently, the backups have been exceeding the 1TB size of a single USB
>> drive, which would be fine, I'll just use more than one drive.
>>
>> What isn't fine is that bacula is recycling its volumes too quickly.
>> What I expect to happen is that it will fill up the USB drive, (run
>> out of volumes) then wait for another USB drive to be attached,
>> instead of recycling/purging the volumes it just wrote. Here's a log
>> snippet:
>
>
> Did you change the retention period after you created volumes? If so did you
> use the bconsole commands to apply the new pool settings to the existing
> volumes?
>
>
> John
>
>
>
> --
> John M. Drescher
>

On 9/25/2013 9:58 PM, James Harper wrote:
> My backup regime is:
> . Full backups Friday and Saturday night (spread over two nights because too much data to do in one night)
> . Incrementals 3 times a day every other day
> . Every Sunday-Thursday night a virtual full + catalog backup to USB disk for offsite
> . After the virtual full, purge the offsite volume
>
> The offsite disk will only ever be used in the case of a total loss of the backup server, so a catalog restore will be required then anyway, so purging the volume isn't a problem.
I use both full and virtual fulls as well. I have laptops that are hit
or miss on completing their real full, so I too run a virtual full on
everything to get a set of jobs that (at the time) require no
incrementals and are written to USB drives. At that point, the volume
files on those USB devices are copied to another set of USB drives along
with a text dump of the catalog, and the second set is taken offsite. I
use 3 sets of USB drives rotated through connected, onsite firesafe, and
offsite location. Some will not be allowed to use this method due to the
disk-to-disk copy, but it does keep the catalog consistent with
available onsite media for restores. The offsite drives are for
emergency use only, since they require restoring the catalog from the
text dump.

---------- Forwarded message ----------
From: John Drescher <drescherjm@...>
Date: Thu, Sep 26, 2013 at 8:39 AM
Subject: Re: [Bacula-users] Copy Job recycling volumes too quickly
To: Gary Cowell <gary.cowell@...>
> I have a copy job which copies my full backups to a USB drive using
> vchanger once a week.
>
> Recently, the backups have been exceeding the 1TB size of a single USB
> drive, which would be fine, I'll just use more than one drive.
>
> What isn't fine is that bacula is recycling its volumes too quickly.
> What I expect to happen is that it will fill up the USB drive, (run
> out of volumes) then wait for another USB drive to be attached,
> instead of recycling/purging the volumes it just wrote. Here's a log
> snippet:
Did you change the retention period after you created volumes? If so did
you use the bconsole commands to apply the new pool settings to the
existing volumes?
John
--
John M. Drescher

Hello,
2013/9/25 Deepak <deepak@...>
> Hi Guys,
>
> I want single email for all my successful jobs for a complete day and
>
It is not possible with the current Bacula messages routine.
The solution: use an external tool for message aggregation, or prepare a
daily report from the logs stored in the database.
best regards
--
Radosław Korzeniewski
radoslaw@...

>
> I am struggling to find a method of keeping a consistent off-site copy
> without breaking easy restores.
>
> My planned schedule was as follows:
>
> First Sunday of Month: Full Backup to Disk
> Monday-Saturday: Incremental Backup to Disk
> Friday (after the incremental): Virtual Full Backup to Tape
> Subsequent Sundays in Month: Differential Backup to Disk
>
> Keeping 3 months (or more, depending on the rate of change and how well
> compression works, which are currently unknowns) on disk for restores,
> and the Friday virtual full going off-site in case of disaster.
>
> The problem I ran into (well, not actually, since I'm still in testing
> and not production) is on the second Friday of the month: the virtual
> full fails, because it can't find the previous virtual full to build the
> new one from. How do I make it go back to the actual full instead of the
> previous virtual full?
>
> Will I be stuck creating a virtual full to another disk pool, then
> running a copy job for off-site backups, which unfortunately greatly
> reduces the history of on disk data I can keep for restores.
>
> Or is there some method I have yet to discover that will allow me to
> mark the tape volume unavailable so that it ignores that job on the
> subsequent virtual fulls. I have tried playing with setting the enabled
> status to disabled, and updating the volstatus parameter, without
> getting anywhere.
>
My backup regime is:
. Full backups Friday and Saturday night (spread over two nights because too much data to do in one night)
. Incrementals 3 times a day every other day
. Every Sunday-Thursday night a virtual full + catalog backup to USB disk for offsite
. After the virtual full, purge the offsite volume
The offsite disk will only ever be used in the case of a total loss of the backup server, so a catalog restore will be required then anyway, so purging the volume isn't a problem.
I originally modified Bacula so that it could exclude the virtual full medium, but that was a bit of a hack.
My post-catalog backup script does the purging. I have one director doing backups for two sites so there are actually two offsite usb disks (one for each site).
I use autofs for mounting the usb disk automatically. It gets mounted on /backup/offsite. The SDs are completely separate machines from the director, hence the need for the scp.
The script I use follows this email (some stuff redacted).
James
#!/bin/sh
/etc/bacula/scripts/delete_catalog_backup
/usr/bin/mysql --skip-column-names bacula -ubacula -p<password> <<EOF |
SELECT DISTINCT VolumeName
FROM Job
JOIN Pool
ON Job.PoolId = Pool.PoolId
JOIN JobMedia
ON Job.JobId = JobMedia.JobId
JOIN Media
ON JobMedia.MediaId = Media.MediaId
WHERE Pool.Name IN ('site1-offsite', 'site2-offsite');
EOF
while read media
do
echo Purging $media
echo "purge volume=$media" | /usr/bin/bconsole >/dev/null 2>/dev/null
done
# copy catalog bsr to usb too
scp /var/lib/bacula/BackupCatalog.bsr site1-sd-server:/backup/offsite/BackupCatalog.bsr

I am struggling to find a method of keeping a consistent off-site copy
without breaking easy restores.
My planned schedule was as follows:
First Sunday of Month: Full Backup to Disk
Monday-Saturday: Incremental Backup to Disk
Friday (after the incremental): Virtual Full Backup to Tape
Subsequent Sundays in Month: Differential Backup to Disk
Keeping 3 months (or more, depending on the rate of change and how well
compression works, which are currently unknowns) on disk for restores,
and the Friday virtual full going off-site in case of disaster.
The problem I ran into (well, not actually, since I'm still in testing
and not production) is on the second Friday of the month: the virtual
full fails, because it can't find the previous virtual full to build the
new one from. How do I make it go back to the actual full instead of the
previous virtual full?
Will I be stuck creating a virtual full to another disk pool, then
running a copy job for off-site backups, which unfortunately greatly
reduces the history of on disk data I can keep for restores.
Or is there some method I have yet to discover that will allow me to
mark the tape volume unavailable so that it ignores that job on the
subsequent virtual fulls. I have tried playing with setting the enabled
status to disabled, and updating the volstatus parameter, without
getting anywhere.
--
Thanks,
Dean E. Weimer
http://www.dweimer.net/