Stats

The way it works is that 'inotifywait' runs in daemon mode and writes a line to stdout whenever the watched file is modified. This is how I am sending messages from the 'pup_event_backend_d' background daemon: the backend runs from startup of Puppy, while the frontend only runs when X is running.

The problem is, I found that sometimes if I quit X then restart X, inotifywait fails to restart, with a message "Couldn't initialize inotify" on stderr.

The virtues of the inotify mechanism are extolled extensively all over the Internet, but this problem is a show-stopper for me. A quick google turned up few hits. I have posted a query to the inotify-tools mailing list, asking how I can stop and then restart inotifywait reliably.

If it turns out not to be solvable, I will have to turn to some other means of sending messages between the backend and frontend daemons. A fifo buffer is required: the frontend daemon must be able to quit and, when restarted, pick up the same fifo interface.
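For the record, the basic fifo mechanism could look something like this. This is just a sketch: the fifo path and message text are illustrative, not the actual names used in Puppy.

```shell
FIFO="${TMPDIR:-/tmp}/pup_event.$$"   # hypothetical path, not Puppy's real one
[ -p "$FIFO" ] || mkfifo "$FIFO"

# Backend side: each write blocks until some reader has the fifo open.
echo "first message" > "$FIFO" &

# Frontend side: read one message ...
read MSG1 < "$FIFO"

# ... then "quit", reopen the very same fifo, and keep receiving.
echo "second message" > "$FIFO" &
read MSG2 < "$FIFO"

echo "$MSG1 / $MSG2"
rm -f "$FIFO"
```

The point of interest is the second read: a named pipe persists on the filesystem, so a restarted frontend can reopen the same path and carry on where it left off.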

Posted on 30 Jun 2008, 8:55

Comments:

Posted on 31 Jun 2008, 2:49 by Dougal
A Solution
There's something that can be done to solve that, albeit a little awkwardly:
Say you use a fifo, /tmp/tmpfifo, to transfer the data between the backend
and frontend.
In the backend script, just before you echo your info into the fifo, put:
until pidof X >/dev/null ; do sleep 1 ; done

This way it will wait for X (and hopefully, the frontend) before using
the pipe...
The only problem here is that if you're not using X, it will delay all
event handling by the backend, since it will just keep waiting on that
first iteration...

This can be solved in an even more awkward way: using an intermediate script:
-------------------------
#!/bin/sh
# Buffer lines between the backend's fifo and the frontend's fifo.
INPIPE=/tmp/tmpfifo1    # fed by the backend
OUTPIPE=/tmp/tmpfifo2   # read by the frontend
[ -p "$INPIPE" ] || mkfifo "$INPIPE"
[ -p "$OUTPIPE" ] || mkfifo "$OUTPIPE"
while read ALINE ; do
  # Hold each line until X (and hopefully the frontend) is up.
  until pidof X >/dev/null ; do sleep 1 ; done
  echo "$ALINE"
done <"$INPIPE" >"$OUTPIPE"
-------------------------
(Where /tmp/tmpfifo1 is the one the backend feeds, while /tmp/tmpfifo2
is the one the frontend reads from.)
This way the shell running the intermediate script will buffer everything
and you won't get any broken pipes...
(the only problem is if you never run X, this might take up memory...
you could add a timeout for it -- then just make sure the intermediate
is running if X is ever started)
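That timeout could be sketched like this. TIMEOUT and wait_for are hypothetical names of my own, not anything from the actual Puppy scripts:

```shell
# Give up waiting after TIMEOUT seconds instead of blocking forever.
TIMEOUT=60   # arbitrary choice

# Run the given command once a second until it succeeds or TIMEOUT expires.
# Returns 0 if the condition came true, 1 if we gave up.
wait_for() {
    waited=0
    until "$@" >/dev/null 2>&1 ; do
        waited=$((waited + 1))
        [ "$waited" -gt "$TIMEOUT" ] && return 1
        sleep 1
    done
    return 0
}

# In the intermediate loop, instead of the bare 'until pidof X' line:
#   wait_for pidof X || continue    # drop this line if X never appears
```

Whether to drop the line or keep it queued on timeout is a policy decision; dropping keeps memory bounded at the cost of losing events on X-less systems.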

Note that the loop from the intermediate script can actually be inserted
into the backend (being fed by the "done" of your big loop), thus saving
the need for two fifos, but that would make it harder for you to add
debug messages etc...
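That single-fifo arrangement might look roughly like this. It is only a sketch: frontend_ready is a stub standing in for the 'pidof X' check (hard-wired to true here so the example runs anywhere), and the for loop stands in for the backend's real event loop.

```shell
OUTPIPE="${TMPDIR:-/tmp}/pup_out.$$"   # hypothetical path
[ -p "$OUTPIPE" ] || mkfifo "$OUTPIPE"

frontend_ready() { true ; }   # stub; the real check would be: pidof X >/dev/null

backend_loop() {
    for EVENT in "event one" "event two" ; do   # placeholder event source
        # The gating now lives inside the big loop itself.
        until frontend_ready ; do sleep 1 ; done
        echo "$EVENT"
    done
}

backend_loop > "$OUTPIPE" &     # the loop's output feeds the single fifo
RECEIVED=$(cat "$OUTPIPE")      # frontend side reads that same fifo
echo "$RECEIVED"
rm -f "$OUTPIPE"
```

As Dougal notes, folding the gate into the backend saves a fifo but makes it harder to tap the stream for debugging.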