Thanks for the recommendation. I'll give it a try. Do you think the VRD commercial cuts are "better" than the comskip/comcut that kmttg uses (assuming the quickstream fix part runs first)?

comskip is generally better at ad detection and can be fine-tuned for particular shows using custom "comskip.ini" files. Neither one is perfect, and personally I skip Ad Detect and just manually edit out commercials with VRD myself - quicker and more accurate than any automated approach. There are options in kmttg to bring up VRD automatically, either to review detected commercials before the cut stage, or just to let you do the commercial cuts manually (this is what I do) before proceeding to the next steps. If you have VRD as part of the flow, then it will be used for the actual commercial cutting step instead of mencoder.

First, props to moyekj for kmttg; it is exactly what I need to automate the process of managing TiVo recordings and archives. When I started using it, I found that, at least on my system, running only one task at a time is not optimal, but running multiple tasks at once is not always ideal either - it depends on what the concurrent tasks are. The ability to set the active job limit is nice, but I wanted to take it a step further.

Before I go any further, I just want to say I hope I am not stepping on any toes with this post, as I really like this tool, and as a programmer, I know this sort of thing is one of the most valuable contributions a person can make in return for such a nicely done free tool. If I am doing anything wrong by posting these patches in this manner, let me know and I'll do it differently next time. Now, back to the patches...

For instance, I have a quad-core CPU, and I find it makes the most sense to have encodes utilize all 4 cores. That said, there is no reason to run more than one encode process at a time, as a single encode will use almost all of the processor. On the other hand, I also want to be making progress on processing the downloads: decrypting, removing ads, etc. In fact, since the encode takes longer than all these other tasks combined, I want it to put some priority on presenting me with all the VideoRedo reviews early, so that I can go to bed knowing it will stay busy encoding all night and not wait for me to intervene.

So, I want to run only one encode task at a time, and I want to run no more than one other disk-intensive task at a time so that I do not overburden the hard drive. I want as many Video Redo Reviews up front as possible, but keep encodes going as continuously as possible.

Finally, I want to be able to save my current job stack in case I need to interrupt processing for some reason (like a reboot). This may not be a problem when you are all caught up, but since I just started using this tool, I had a lot of old .tivo files to convert. I got tired of having to restart all my tasks whenever I had to reboot or needed my processor back for a while to do some work.

So, I have added the following features to my local checkout:

- Restrict to 1 encode job at a time (the most CPU-heavy of them all)
- Restrict to 1 disk-intensive job at a time
- Do not allow decrypt jobs to start when 2 other intensive jobs are running and there are more qsfix jobs than decrypt jobs in the queue (an optimization so vrdreview tasks get done earlier)
- Do not allow adcut jobs to start when 2 encode jobs are stacked at the top of the queue, unless no other tasks can be run (an optimization so vrdreview tasks get done earlier)
- Added a "File->Toggle Launching New Jobs" option to stop new jobs from starting. This allows for a graceful shutdown of jobs so the job stack can be saved for later reload.
- Added a "File->Save Job Stack" option to save (serialize) the current job stack to a file called jobStack.dat. This operation cannot be done while jobs are active, since the active state of a running task cannot be meaningfully serialized and you would need to clean up any mess left by aborting it.
- Added a "File->Load Job Stack" option to load (deserialize) a saved job stack from jobStack.dat. This cannot be done when jobs are already running or queued, to prevent mistakes.

I am running this on my system with active job limit = 4, encoding cpu cores = 4, and VideoRedo set up to do qsfix, adscan, and adcut. I have not extensively tested it when running as a service, or on platforms other than Windows. So I am not saying these patches should necessarily all be integrated into the release software; that depends on whether there is any demand for them beyond myself. If not, I am free to run with my patches to my heart's content, but I'll have to merge in any future updates.

These are more like suggestions based on my experience using the app. I am sure my preferences for how these processes are prioritized could differ depending on the exact hardware, but I suspect my configuration is similar to many other systems. It would be nice if these adjustments were more configurable, but I just wanted to get a proof of concept working for myself. Perhaps config options for some of the hard-coded numbers would make it more useful for others. Also, the load/save jobs options should probably present the user with a file selection dialog.

I have attached the diffs between my patched source code and svn revision 1078. If any of these ideas are interesting enough to you guys, please feel free to add them and/or modify them as you see fit. If there are any questions, don't hesitate to ask.

Good stuff!
If "Save Job Stack" and "Load Job Stack" could be manipulated programmatically, I'd be all over it. Since it's a GUI toggle, I don't think I'd be able to make much use of it. Using a batch job and Windows Task Scheduler, I bounce the kmttg task every noon and midnight to clear the log at (semi) regular times, so having the Job Stack functions would be great to incorporate into the shutdown process. As it is now, I use parsed output from Windows tasklist to determine whether kmttg has any jobs running. If idle, taskkill is used to stop kmttg. If not idle, the batch sleeps for "x" seconds, then checks the tasklist output again. Repeat until kmttg is idle. Sometimes it's HOURS before kmttg is idle...

I'd think that maybe the server should provide an RPC or SOAP service. Through that, you can have it run commands such as the jobMonitor.saveJobs() or jobMonitor.loadJobs() which I added to that class.

The system can still be handled as a service if necessary, but I haven't thought long about how it would be called outside of the GUI, I just haven't made it that far along in the thought process. I'm hoping that what I have already done is enough to get a conversation going about how these ideas may or may not be needed and if so what rules they should really have.

Knowing next to nothing about programming in Java: would java.io.Console be another alternative? I'm reading up on it now!

I thought the ability to save and load the job stack was pretty interesting, so I incorporated your changes related to that into the latest SVN, with a few changes:
1. Save and load are restricted to queued jobs only, and I used different method names to reflect that.
2. Changed the "Toggle Launching New Jobs" File menu item to a toggle menu item entitled "Do not launch queued jobs".
3. Changed the other menu item names to Save/Load queued jobs.

Please try out the version from SVN to see if the save/load capability still meets your intent. There are still some things to ponder:
1. It could very well be that some queued jobs will no longer be relevant when loaded - for example, a queued download whose source has been deleted from the TiVo, or an encode job whose source file is now missing. These and any subsequent dependent jobs will then fail.
2. Currently I still prevent loading saved jobs if there are any active or queued jobs in place, but since the load is for queued jobs only, perhaps that is not really necessary.
3. Should kmttg remove the saved jobs file once loaded? For now it does not, and my tendency is to say no, but then you can end up with a very old file lying around if you don't remove it.
4. Ability to specify a file name to save/load? My feeling is that is overkill, and a single fixed file name is probably sufficient for most people.

As for your other changes related to job management, I did not incorporate those, as they are tailored more to individual needs. If others are interested in that additional level of management, perhaps a new config setting to enable/disable it could be wrapped around it all.

So far so good. I updated, recompiled, and it is doing what I want. I like that it now saves only the queued jobs, because I actually hit a bug with the previous behavior where it could not serialize a backgroundProcess even though I could see no running jobs. That prevented me from saving, so I had to lose the list. I don't know what was going on there, but now that shouldn't occur.

As for the dialog box, I agree it is probably overkill to have more than one jobData.dat file, since the file is really a use-once, throw-away file with little reuse value. For that same reason, deleting the file once loaded might also be a good idea, just to keep old ones from being left lying around. The only downside is if for some reason you need to use it again, but manually copying the file is an option in that case. I could go either way.

Yes, loading a file that was saved earlier, after other jobs have run in the meantime, will leave some entries in the file irrelevant. Loading that file will then result in errors as it hits things that no longer exist. I ran into that myself, but I am not sure what can be done about it, except maybe deleting the jobData.dat file when jobs are running and requiring that it be saved only after all running jobs have been allowed to complete.

Theoretically, loading the file with other jobs in the queue would just append the jobs from the file to the current stack, which isn't necessarily a bad thing, so I agree we could take that restriction out. I had originally thought I would serialize the stack and reload the stack itself, making the load operation destructive, but then settled on serializing one job at a time.

I decided to throw orangeboy a bone on the idea of programmatically controlling these commands. For continuity, I have folded my job management code back into the new version, and I added a checkbox to the configuration dialog to enable/disable it. I have attached a new diff file, which also includes the new code for orangeboy's idea.

To summarize the programmatic job control feature:
* When a file called control.dat is created, the jobMonitor will pick it up and execute commands from the file, one command per line. When complete, control.dat is deleted.

* When disableNewJobs is in effect, an empty file called idle.dat is created when the running job queue becomes empty. This file is deleted when jobs run again.

orangeboy, let me know if this might work.
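To make the interface concrete, here is a minimal sketch of what driving it from a script could look like. It is shown in POSIX shell for brevity (the real use case above is a Windows batch file); the control.dat and idle.dat names come from the feature description, while the poll interval, the bounded retry count, and the assumption that the script runs in kmttg's working directory are all mine.

```shell
# hand kmttg one command per line in a single write; the jobMonitor
# deletes control.dat once it has executed the commands in it
echo "disableNewJobs" > control.dat

# with disableNewJobs in effect, kmttg creates idle.dat once the running
# job queue drains; poll for it (bounded here so the sketch terminates)
tries=0
while [ ! -f idle.dat ] && [ "$tries" -lt 3 ]; do
    sleep 1
    tries=$((tries + 1))
done

if [ -f idle.dat ]; then
    echo "kmttg is idle; safe to stop it now"
else
    echo "still busy (or kmttg is not running)"
fi
```

A scheduled task could run something like this at midnight, wait for the idle marker, and only then kill or restart the kmttg process.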

Thanks, moyekj for adding the job enable/disable/load/save features. I'm glad I could make a useful contribution. Yes, I agree the job management features are perhaps a bit too specific to my system, but at any rate, I went ahead and produced the new diffs and cleaned it up with a config option in case anyone else is interested in this later.

Thanks for the effort - You made this old dog pretty happy!
I applied some of the changes to jobMonitor.java and gui.java and compiled. I still have 15 jobs in queue, so I can't replace kmttg.jar just yet. I look forward to testing it out!

Speaking of contributions, I wrote a little Windows batch file to compare my successful "download" jobs with my successful "pushed" jobs. It uses diff from DiffUtils for Windows to produce the final "side by side" file. It makes it easy to see visually which jobs (potentially) didn't make it to the final step. This bit of code is actually part of a larger batch that I run as a custom command, but I figured it could be of use to some of the folks here!
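The idea sketches out roughly like this - shown in portable shell rather than batch, and with a toy log standing in for kmttg's real output, since the actual log line format the batch parses isn't shown here and the names below are made up for illustration:

```shell
# toy stand-in for kmttg's log; the real format differs
cat > kmttg.log <<'EOF'
download job completed: Show A
download job completed: Show B
push job completed: Show A
EOF

# pull out the titles for each successful job type into sorted lists
grep '^download job completed: ' kmttg.log | sed 's/^download job completed: //' | sort > downloaded.txt
grep '^push job completed: ' kmttg.log | sed 's/^push job completed: //' | sort > pushed.txt

# side-by-side comparison; a title with no right-hand partner
# (flagged "<") never made it to the push step
diff -y downloaded.txt pushed.txt > side_by_side.txt || true
cat side_by_side.txt
```

With the toy log above, "Show B" shows up only on the left side of the comparison, which is exactly the "didn't make it to the final step" signal described.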

2300+ posts to read through, wow! Can someone give me a quick roundup of what I need to download to get all of this to work with commercial skip? I am using a Windows 98 pro laptop.

You're in luck! I just set up both pyTivo and kmttg at my bro-in-law's house last night.

kmttg is pretty straightforward. There's a great wiki with installation instructions for kmttg here: http://code.google.com/p/kmttg/w/list, as well as subsequent configuration instructions.

It's also recommended to incorporate VideoReDo Plus to "help" the process along. It can execute a "Quick Stream Fix" to smooth out any video glitches, and it can also be used to do the actual commercial editing in place of the supplied mencoder. A free 15-day trial version of VideoReDo can be found here: http://www.videoredo.com/en/index.htm.

I mentioned pyTivo earlier because it's a great app to get the commercial-cut program back to the TiVo for viewing. Awesome instructions for setting up pyTivo here: "Single Page of Install Instructions".

Also, if your active job limit is above 1, you have auto transfers set to loop in the GUI, and you have VRD review enabled, you'll find that while there are active jobs it will not "loop in GUI", but rather waits until all of the active jobs are done before looping again.

Is it possible to have it continue looping? That way, if I have multiple jobs set to auto transfer and am looping AT in the GUI, and it downloads one show, runs VRD, and then brings up VRD for review, it could check for and start downloading and processing a second video while it waits for me to review the first one, instead of sitting idle while there are still more shows to download.

That means an older version of VideoRedo without support for SetFilterDimensions is being run. If you have more than one version of VideoRedo installed, note that the last version you ran in GUI mode is the one that will be registered in the registry to run in COM mode, so if you last ran the older version, that's the one that will be used.
If that's not it and you are indeed intentionally running an old version of VRD, then you should turn off the "Enable VideoRedo QS Fix video dimension filter" option in kmttg.

As for your other question, I think it's behaving as designed. Loop in GUI waits until all tasks for the matches of the just-run query complete before sleeping the configured time and then looking for more matches again. Note that each TiVo on your network gets its own independent loop.

If you want downloads to happen one after another, one option is to not enable the "Ad Cut" task and to turn on the "Loop in GUI" option.
Then, when you are ready for commercial cutting/reviewing, you can run a 2nd kmttg session and start from FILES mode to run "Ad Cut", with the VRD Review option enabled, on the already downloaded shows.

i.e. One kmttg session is devoted to getting downloads as quickly as possible, and another session is devoted to the manual part of the task set - reviewing commercial cuts - which you only run when ready.

Yes, I'm VERY happy with it!
I saw what you did, and actually added 2 new commands as well:

enableLoopInGUI and
disableLoopInGUI

Attached is my (not as pretty in Windows) diff.

Good to hear this is working. At the risk of making myself appear to have too much time on my hands, I decided to polish the design a little more. I realized that the idle detection was being done before new jobs are launched, so it was creating an idle.dat file when the last job had just completed and the next job was launching. This only comes up with one job running, but it was still a small bug that is now fixed. I added in orangeboy's two new commands and a few more of my own.

I like the idea mentioned by ThAbtO of having it shut down when everything is finished, but with a twist. The onIdleRun and onUnidleRun commands allow us to tie all these other features together quite neatly, and you can get your auto shutdown at the end if desired. For straight GUI users, I threw in a File->Run configured idle commands item and a "perform action when idle" setting under Program Options. If the action is set to "shutdown" and "Run configured idle commands" is checked, then kmttg will issue a shutdown command when it becomes idle.

The other commands can be seen near the bottom of the attached diff file under "runCommand", but the full list is: onIdleRun, onUnidleRun, deleteJobData, createIdleFile, deleteIdleFile, disableOnIdle, disableNewJobs, enableNewJobs, disableLoopInGui, enableLoopInGui, saveQueuedJobs, loadQueuedJobs, exit, shutdown, and exec.

An example of a control.dat file I might pass into it would be:
disableNewJobs
onIdleRun saveQueuedJobs exec:mesg.bat exit

I am now running this on my system for long-term testing. It has passed my initial testing so I'm going to go ahead and put this out there for anyone who is following along and wants to try a custom build. Obviously these features are beta/not-even-in-the-release right now, so no guarantees that moyekj will approve them in their current form, but that would be cool.

I do have a real job to get back to now, so this is probably my last update for a while, unless I need to post a bug fix, but let me know if there is any other feedback about these features.

That means an older version of VideoRedo without support for SetFilterDimensions is being run. If you have more than one version of VideoRedo installed, note that the last version you ran in GUI mode is the one that will be registered in the registry to run in COM mode, so if you last ran the older version, that's the one that will be used.

As far as I know, I am running the most recent version of VideoReDo Plus: 2.5.6.512. I downloaded it right from their site, and it does show a QS Fix option under Tools...

As far as the Loop in GUI mode goes, I would think it shouldn't wait for all jobs to finish before looping again - since all it would be doing is adding more tasks to the queue.

I thought control.dat was deleted after the command was read? I've been delaying writes to control.dat with sleep, which comes in the Windows Resource Kits:

<snip>

The above bit of Windows batch code "primes" control.dat so the first action is to disable new jobs from being processed, "starts" kmttg, then parses tasklist output to find curl.exe, which executes on behalf of kmttg to get the NowPlaying lists from the TiVo(s). I'd like to eliminate that last part by having the control.dat actions occur earlier, before the NowPlaying lists are gathered, but I'm not sure where the NPL processing gets triggered. The rest of the batch code loads and enables any jobs saved prior to shutdown, and enables Loop In GUI.

Without adding the & SLEEP commands, I found that not all of the commands were being issued, specifically the loadQueuedJobs that followed the restart of kmttg...

Yes, the control.dat file is deleted immediately after it is read, but it can contain multiple commands (one per line). Every command is executed immediately when the file is read, so your approach of using sleep and checking process state may still be necessary to handle the timing of what you are trying to do. The exception is the onIdleRun and onUnidleRun commands, which define a series of commands to be executed when kmttg becomes idle, or goes from idle back to unidle. In that case, the commands are separated by spaces on the same line following the onIdleRun or onUnidleRun, and they are saved for later execution when that event occurs.

At any rate, the functionality I am providing here is mainly meant to be forward-looking about other possible uses. It doesn't necessarily replace the need to do some of your own work in your scripts; it just gives you a little more power over kmttg via the control.dat interface. So I see it as completely normal that you are creating control.dat files multiple times from your batch script. My examples were just to illustrate a couple of things I was testing.

Without adding the & SLEEP commands, I found that not all of the commands were being issued, specifically the loadQueuedJobs that followed the restart of kmttg...

Oh, I missed the significance of this earlier. As soon as the file is created, it is scooped up, processed, and deleted, so you need to write whatever you are going to write in one atomic operation. If you do multiple "echo something > control.dat" commands in a row, you are probably overwriting the file before kmttg polls it, and some commands get lost that way. If you want to use echo redirects in that simple form, you can only issue one command at a time, and you will need to sleep between them to let kmttg process each one. If you want multiple commands in one file, you will need to write the file something like this:
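Something along these lines should work - shown here in POSIX-shell form; in a Windows batch file the same idea is a parenthesized block of echo statements redirected once. The command names match the list posted earlier; writing to a temp name and renaming into place is my own precaution against kmttg polling mid-write, not something kmttg requires:

```shell
# build the whole command file first, then rename it into place so
# kmttg's poll can never observe a half-written control.dat
{
    echo "disableNewJobs"
    echo "onIdleRun saveQueuedJobs exit"
} > control.dat.tmp
mv control.dat.tmp control.dat
```

Either way, the point is a single write of the complete file, rather than a sequence of separate redirects that race against kmttg's polling.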

Gotcha.

I installed kmttg and pyTivo at my bro-in-law's house, and his computer is not very powerful. He asked if kmttg could be fired up at night when he isn't browsing the web/checking email/whatever. I hadn't thought about it until he mentioned it, but Windows Task Scheduler (or cron for *nix, or the equivalent for Mac) could surely take advantage of these control commands!

Yeah, speaking of CPU power: my Phenom II X4 955, which seemed super fast last fall, is too slow now, and it was pushing 60C when running all the time. I started noticing some instability here and there, especially in VideoRedo. I would go to bed and come back to find VideoRedo locked up, and the whole machine would freeze when I tried to kill it. Just yesterday I got my Phenom II X6 1090T, and it runs at a nice 45C with all 6 cores at 100% for hours. So far, no more crashing. Sweeet.

As far as I know, I am running the most recent version of VideoReDo Plus: 2.5.6.512. I downloaded it right from their site, and it does show a QS Fix option under Tools...

That is a very old version of Plus that doesn't have video filter support in COM mode. You can get a much more recent version (3.10.2.596) from the forums: http://www.videoredo.net/msgBoard/showthread.php?t=6972
(They haven't released official versions for a while for some reason, but the latest beta versions of Plus are very stable.)

Quote:

As far as the Loop in GUI mode goes, I would think it shouldn't wait for all jobs to finish before looping again - since all it would be doing is adding more tasks to the queue.

It's not designed that way currently. The problem with doing things that way is that it can end up queuing a lot of tasks all at once, which can lead to a lot of disk space being used at once and the computer being overwhelmed with tasks, so I designed it as a more distributed approach.

OK, that would explain why I thought I had the most recent version. I'll download the beta and try that.

I understand your reason for designing it that way. Is there an easy way I can force it to run the auto transfer in the GUI at the specified interval, regardless of whether it already has things queued? I have the task limit set to 2 so that it can start the next download while waiting for the VRD review to happen, rather than waiting for that to close before performing other tasks. I have a dedicated computer for this, so using up disk space or running too many tasks isn't an issue (nor should it be, since you can specify how many tasks to run at once).

Not with current code, no. I did suggest a way to accomplish what you want though using 2 instances of kmttg.

You can use the custom command to perform a shutdown if desired. When I want to shut down after processing a batch of files, I use the custom command on a dummy .mpg file at the end of the job queue. That file only has the custom command scheduled, and the custom command triggers a shutdown.

It's probably a service permissions issue. First make sure you get the transfers working properly via the GUI. In the latest version there are both "Run Once in GUI" and "Loop in GUI" options. It's also useful to use the "Dry Run Mode" option to test your auto transfers setup without having kmttg actually initiate downloads. If/once you get that working, then focus on getting service mode to work. Consult the Wiki page: http://code.google.com/p/kmttg/wiki/auto_transfers
Pay particular attention to the section entitled "RUNNING THE AUTO TRANSFERS PROGRAM AS A SERVICE IN WINDOWS". Ultimately it boils down to permissions issues, where you need to run the service under the same account you use to run the GUI.

I never did figure out what was causing my problem. I never could get the dry run to run anything as it was supposed to. I finally just deleted and cleaned up the install, then re-installed and re-configured, and then everything started running. I guess when all else fails, just start over.