Thanks.
I'll give it a try, but because of the way my schedules are organized, I won't know if it worked until this time next month.
From: Bryn Hughes [mailto:linux@...]
Sent: 03 March 2015 16:14
To: bacula-users@...
Subject: Re: [Bacula-users] Job batches
On 2015-03-03 07:01 AM, Luc Van der Veken wrote:
If jobs are added in two batches at different times, does the oldest batch have to be completely finished before the newer one is started?
My backups are made fd -> file (disk) store -> copy to tape (an old LTO2 drive).
One client is huge compared to the others, copying it to tape takes some time (about 5 tapes).
My problem is that new incremental backups that should start while it is being copied just sit there "waiting for execution" until the copy operation has completed. Yet they are set to run at a higher priority, neither the storage they are to be written to nor any source fd is in use at the time, and the sd and director both have sufficiently high 'Maximum Concurrent Jobs' settings.
I've gone over all the configuration files several times to see if I haven't forgotten a 'maximum concurrent' directive somewhere, but I can find no reason why those jobs shouldn't start.
PS: sorry if this is a repeat question; it sounds rather familiar while I am writing it, but I didn't find an older version. It's also possible that I started writing it a few months ago, but then decided not to post it and keep searching a bit more ;)
Do all of your jobs have the same priority setting? Jobs with different priorities won't execute at the same time.
Ah actually I see that you say above you're using different priority levels. Make everything the same priority and give it a try.
The priority thing is a little weird; it doesn't work quite the way one might expect. It won't kick off a higher-priority job until the currently running job has completed, so:
- Job 'A' is running with priority 10
- Job 'B' is queued with priority 5
Job 'B' won't execute until Job 'A' has completed.
Something like:
- Job 'A' is running with priority 10
- Job 'B' is queued with priority 5
- Job 'C' is queued with priority 10
- Job 'D' is queued with priority 5
Job 'A' will run until it is complete, then Jobs 'B' and 'D' will kick off at the same time (assuming no concurrency limits are exceeded), and then Job 'C' will kick off once 'B' and 'D' have completed.
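If it helps, here is a quick way to double-check what's actually configured and what the Director thinks is queued (a rough sketch -- paths assume a stock /etc/bacula layout, so adjust for your install):

# Find every Priority and Maximum Concurrent Jobs directive
# (the directive may also be written without spaces)
grep -Erin 'priority|maximum ?concurrent ?jobs' /etc/bacula/

# Ask the Director what is running/queued and why it is waiting
echo "status dir" | bconsole

'status dir' normally lists waiting jobs together with the reason they are waiting, which usually makes the blocker obvious.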

> On Mar 2, 2015, at 10:56 AM, Dan Langille <dan@...> wrote:
>
> On Mar 2, 2015, at 6:39 AM, Alan Brown <a.brown@...> wrote:
>>
>> On 14/02/15 23:26, Dan Langille wrote:
>>> This post came to my attention recently: https://lists.freebsd.org/pipermail/freebsd-scsi/2015-February/006581.html
>>>
>>> In short: "The primary focus of these changes is to modernize FreeBSD's
>>> tape infrastructure so that we can take advantage of some of the
>>> features of modern tape drives and allow support for LTFS."
>>>
>>> I don't know enough about these devices to gauge the effects on the project.
>>
>> As far as I can tell, LTFS support on Linux sits on top of the standard mt-st stuff as a userspace (FUSE) filesystem.
>>
>> I'd hope it's much the same with BSD. Removing the standard interface would be counterproductive overall.
>
> I don't know the details. The code has been committed to FreeBSD HEAD. I installed that on my test system here.
> I have run some tar tests and the code passes the basic btape test. Next step is Bacula jobs.
>
> Does this confirm that the standard interface remains?
I asked the developer: https://lists.freebsd.org/pipermail/freebsd-scsi/2015-March/006619.html
--
Dan Langille
http://langille.org/

On 2/27/2015 6:53 PM, Heitor Faria wrote:
>
> Dear Bacula Users,
>
> 1. Why isn't FIFO more popular for database dump backups? Is
> there any drawback?
> 2. I inserted ReadFifo=yes in the FileSet Options and configured a
> simple RunBeforeJob script (below) to test FIFO backup, but
> the backup job stalls waiting for script termination. Am I
> missing something?
>
>
> The FIFO special file created by mkfifo will block for read or
> write until both ends of the FIFO are opened. See man mkfifo(3).
>
> Thanks Josh, I'm aware. I think that's why the ReadFifo option exists in
> Bacula:
The last line of your script is attempting to write to the FIFO. Bacula
launches the script and waits for it to exit before continuing with the
job. So your script is waiting on Bacula to open the FIFO for read, but
Bacula will never do so, because Bacula is waiting on the script to
exit. You cannot write to the FIFO from the RunBeforeJob script itself.
Instead, the script must launch a background process to write to the
FIFO and then exit, so that Bacula can continue the job and
open the FIFO for reading.
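Something along these lines usually does the trick (a rough, untested sketch -- the FIFO path and the mysqldump command are only placeholders for whatever you actually want to dump):

#!/bin/sh
# Sketch of a RunBeforeJob script: start the FIFO writer in the
# background and exit, so Bacula can continue and open the FIFO for read.

# Per the exec > /dev/null tip in the manual text you quote below:
# detach stdout so the background child doesn't keep Bacula's pipe
# to this script open after we exit.
exec > /dev/null

FIFO=/tmp/tubo
[ -p "$FIFO" ] || mkfifo "$FIFO"

# The writer blocks on open() until the fd opens the FIFO for reading,
# which happens once the backup job reaches the FIFO in the FileSet.
mysqldump --all-databases > "$FIFO" &

exit 0

The FileSet then lists /tmp/tubo itself, with readfifo=yes in its Options, as described in the text you quoted.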
>
> *"readfifo=yesno*If enabled, tells the Client to read the data on a
> backup and write the data on a restore to any FIFO (pipe) that is
> explicitly mentioned in the FileSet. In this case, you must have a
> program already running that writes into the FIFO for a backup or
> reads from the FIFO on a restore. This can be accomplished with the
> *RunBeforeJob* directive. If this is not the case, Bacula will hang
> indefinitely on reading/writing the FIFO. When this is not enabled
> (default), the Client simply saves the directory entry for the FIFO.
>
> Unfortunately, when Bacula runs a RunBeforeJob, it waits until that
> script terminates, and if the script accesses the FIFO to write into
> the it, the Bacula job will block and everything will stall. However,
> Vladimir Stavrinov as supplied tip that allows this feature to work
> correctly. He simply adds the following to the beginning of the
> RunBeforeJob script:
>
> exec > /dev/null"
> P.S.: I tried the exec > /dev/null tip.
>
> [ -p /tmp/tubo ] || mkfifo /tmp/tubo
> ls -l > /tmp/tubo

On 02/27/2015 09:59 AM, Heitor Faria wrote:
>
>>> ... [I] ponder if it might be worthwhile to rethink the way I am
>>> doing backups.
>>
>> How much manual work is involved, and how much is the time of whoever
>> does it worth? E.g. RDX might be cheaper. Or BackupPC.
>
> [2] Or hiring a VPS (e.g.: http://www.chicagovps.net) and installing a bacula-sd on it. =)
I expect Amazon VTL is even better, but I don't think that gives you
physical DVDs.
As I understand it, sending BackupPC's "archive" job to a DVD is
trivial, except you still have to put the disc in the drive -- depending on
how often you do that, RDX may be a better option.
--
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu

> For tapes, each tape is a volume. Also, recycling does not work if you
> only have one volume. This includes disk-based storage.
Recycling does not work with a single volume in a pool, because recycling
applies to an entire volume or not at all, even on disk-based storage.
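For what it's worth, you can confirm how many volumes a pool actually holds from bconsole (the pool name below is only an example):

# Substitute your own pool name
echo "list volumes pool=File" | bconsole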
John

> A Pool may contain many volumes. What's the advantage of multiple volumes in a
> pool?
> A Volume is a label for a piece of storage in a pool. What's the purpose of a volume?
> Why does the Job resource have both a Pool and a Storage attribute? It's supposed to
> have only the Pool attribute!
I hope you can get some clarification in the 3rd lesson (it's open): https://www.udemy.com/bacula-backup-software/?dtcode=Gbsc4TG2tfWb
Regards,
==============================================================================
Heitor Medrado de Faria - LPIC-III | ITIL-F
March 2 to 13 - New Live Online Bacula Training: http://www.bacula.com.br/?p=2174
61 8268-4220
Site: http://www.bacula.com.br | Facebook: heitor.faria | Gtalk: heitorfaria@...
===============================================================================

> > All post-bacula handling (write to DVD, off-siting, moving
> > storage locations around to fit varying storage needs/capacities,
> > etc) is done by hand or with the help of scripts
>
> > ... [I] ponder if it might be worthwhile to rethink the way I am
> > doing backups.
>
> How much manual work is involved, and how much is the time of whoever
> does it worth? E.g. RDX might be cheaper. Or BackupPC.
[2] Or hiring a VPS (e.g.: http://www.chicagovps.net) and installing a bacula-sd on it. =)
> Dimitri