Since all device jobs now go through FD anyway, a plugin that wants to wait for device jobs to complete needs only to poll gui.jobs_manager.has_device_jobs periodically.
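A minimal sketch of that polling loop. The JobsManager class here is a stand-in I made up for illustration; only the has_device_jobs attribute name comes from the discussion above, and a real plugin would poll the actual gui.jobs_manager:

```python
import time

class JobsManager:
    """Illustrative stand-in for gui.jobs_manager; not calibre's real class."""
    def __init__(self, pending):
        self._pending = pending  # simulated count of outstanding device jobs

    @property
    def has_device_jobs(self):
        # Simulate one job finishing per poll; the real property just
        # reports whether any device jobs are still queued or running.
        self._pending = max(0, self._pending - 1)
        return self._pending > 0

def wait_for_device_jobs(jobs_manager, poll_interval=0.05, timeout=10.0):
    """Poll until all device jobs are done, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while jobs_manager.has_device_jobs:
        if time.monotonic() > deadline:
            return False  # timed out; jobs still outstanding
        time.sleep(poll_interval)
    return True
```

In a GUI plugin you would use a Qt timer rather than a blocking sleep, but the polling logic is the same.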

The issue is not 'wants to wait'; it is 'must wait'.

The problem is that even with FD, job sequences are not locked. It frequently happens that the done function of job 1 schedules job 2, and the semantics *require* that job 2 run before anything else does. This is true for the sequences upload_books -> add_books_to_metadata -> upload_booklists and get_device_information -> info_read.

Consider meme's sequence. He kicks off the collections job and then immediately queues a 'books' job. If the collections job triggers a sequence, his books job will run in the middle of it because it is already queued. The inserted books job messes with the booklists and changes internal data structures, among other things. When the next job of the sequence runs, it could erase all of this or get confused.

What I am really saying is that a job sequence may well need to be atomic. There are several such sequences in the device connection and book upload logic. Currently they are not atomic, and we cannot guarantee the results.

One way to fix this would be to put jobs queued by an FD at the head of the queue, though I don't quite see how to accomplish that. Another would be to have job sequences set some kind of lock that the last job in the sequence removes, but this has problems when a job in the sequence fails. A third would be to eliminate job sequencing, instead having one job function sequence the steps manually using an FD. This might be the best approach for internal calibre device jobs.
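A sketch of that third option: collapse the whole sequence into a single job body, so the runner never gets a chance to interleave an unrelated job between the steps. Every name here (FakeDevice, sync_books_job, the helper) is hypothetical, standing in for the real driver calls:

```python
calls = []

class FakeDevice:
    """Illustrative stand-in; not calibre's real device driver API."""
    def upload_books(self, books):
        calls.append('upload_books')
        return ['path-%d' % i for i, _ in enumerate(books)]

    def upload_booklists(self, booklists):
        calls.append('upload_booklists')

def add_books_to_metadata(paths, booklists):
    calls.append('add_books_to_metadata')
    booklists.extend(paths)

def sync_books_job(device, books, booklists):
    # One device job that runs the entire sequence itself. Because it is a
    # single job, nothing else queued can run between these three steps.
    paths = device.upload_books(books)
    add_books_to_metadata(paths, booklists)
    device.upload_booklists(booklists)

booklists = []
sync_books_job(FakeDevice(), ['book1', 'book2'], booklists)
```

The trade-off is that intermediate progress reporting and error handling move inside the one job function instead of falling out of the sequence machinery.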