Hi guys,
I have a requirement: I need the job details (job number, name, user) of any job as soon as it enters the QBATCH subsystem.
Do we have any API or command to fetch that?
The only solution I can think of is taking the *PRINT output of the WRKACTJOB command for QBATCH every minute and extracting the details from the spool file, but I don't find that solution optimal.
If anyone has a better idea, kindly suggest.
Thanks,
Deepak

Answer Wiki

Thanks for updating me. But the problem is that I don't have authority for ADDEXITPGM, and we cannot start and stop the subsystem to achieve this either.

Do we have any other option?

What I really need to do is this:

I need the job number, name, and user of a job as soon as it enters the QBATCH subsystem. After extracting those, I will use the QUSRJOBI API to track the status of the job (running, MSGW, etc.).


Discuss This Question: 11 Replies


Your best choice would probably be to register a data queue on the Job Notification Exit Point. You can have a job monitoring for data queue entries and logging whatever activity you think is necessary.
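As a rough sketch, a monitor program in CL could look something like this. All object names are placeholders, and it assumes the exit point entry was registered with key '0001' (as in the ADDEXITPGM example later in this thread); the keyed form of QRCVDTAQ is used because this exit point requires a keyed data queue:

PGM
  DCL VAR(&DATA)   TYPE(*CHAR) LEN(144)          /* NTFY0100 entry */
  DCL VAR(&LEN)    TYPE(*DEC)  LEN(5 0)
  DCL VAR(&WAIT)   TYPE(*DEC)  LEN(5 0) VALUE(-1) /* wait forever */
  DCL VAR(&KEYLEN) TYPE(*DEC)  LEN(3 0) VALUE(4)
  DCL VAR(&KEY)    TYPE(*CHAR) LEN(4)   VALUE('0001')
  DCL VAR(&SNDLEN) TYPE(*DEC)  LEN(3 0) VALUE(0)  /* no sender info */
  DCL VAR(&SENDER) TYPE(*CHAR) LEN(1)
LOOP:
  CALL PGM(QRCVDTAQ) PARM('MYDATAQ   ' 'MYLIB     ' &LEN &DATA +
                          &WAIT 'EQ' &KEYLEN &KEY &SNDLEN &SENDER)
  /* ...extract job name/user/number from &DATA and log them here... */
  GOTO CMDLBL(LOOP)
ENDPGM

The job sits on the QRCVDTAQ call until an entry arrives, so it costs nothing while QBATCH is idle.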
If your system is really old, that exit point might not exist. And if that's the case, your best choice would probably be a simple routing program for the QBATCH subsystem. The routing program would log the activity and then transfer control to QCMD.
No matter how it's done, printing WRKACTJOB or printing anything and processing the spooled output is not a good choice. It's useful that you suggested it because it helps us understand what you want to accomplish. But you should always think that there are better ways than trying to get the information from spooled output. Post questions like this one to learn.
Tom

If you don't have authority, then the task should not be assigned to you. Or tell the person who does have authority to run the ADDEXITPGM command for you. There must always be someone with enough authority. You can write the program, and someone else can add your program to the exit point.
As for changing the subsystem, you don't need to end it. You can change routing programs for active subsystems.
There are no good alternatives. The system provides two methods. You should use methods that are provided because they are documented by IBM and everyone can know what they do.
If your developers have little authority, it means someone is in charge of moving programs into production. That person needs to do his job. That person needs to run the command or be replaced.
If you are told to do it, then you need to be given the authority. Choose your method, and either change the subsystem or get the exit program added.
Tom

You can test a routing program by creating your own subsystem. Create a job queue, create the subsystem, add the job queue entry to the subsystem, set your routing program in your subsystem, and submit a job to your new job queue. -- Tom
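Very roughly, that sequence could look like this (all names are only examples; CLS(QGPL/QBATCH) simply borrows the existing QBATCH class rather than creating a new one):

CRTJOBQ JOBQ(MYLIB/TESTJOBQ)
CRTSBSD SBSD(MYLIB/TESTSBS) POOLS((1 *BASE))
ADDJOBQE SBSD(MYLIB/TESTSBS) JOBQ(MYLIB/TESTJOBQ) MAXACT(1)
ADDRTGE SBSD(MYLIB/TESTSBS) SEQNBR(9999) CMPVAL(*ANY) +
        PGM(MYLIB/MYRTGPGM) CLS(QGPL/QBATCH)
STRSBS SBSD(MYLIB/TESTSBS)
SBMJOB CMD(DLYJOB DLY(5)) JOB(RTGTEST) JOBQ(MYLIB/TESTJOBQ)

The submitted DLYJOB is just a harmless test job; anything that runs for a few seconds will do.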

Hi Tom, I got the authority for the command. Here is my code:

ADDEXITPGM EXITPNT(QIBM_QWT_JOBNOTIFY) FORMAT(NTFY0100) +
           PGMNBR(*LOW) PGM(MYLIB/MYDATAQ) +
           PGMDTA(*JOB 24 '0001*ANY      *ANY')

Where can I specify the subsystem? I only need the jobs from QBATCH. I also got the authority to start and stop the QBATCH subsystem. Thanks a lot, Tom.

You specify the subsystem name and library in the 'Program data' parameter. That's how you associate the entry with QBATCH. Note that there are single-quotes around the data part.
You would create the data queue with this command:
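Something along these lines should do it (use whatever queue name you chose; the exit point requires a keyed queue with a 4-byte key, and the NTFY0100 entry is 144 bytes):

CRTDTAQ DTAQ(MYLIB/MYDATAQ) MAXLEN(144) SEQ(*KEYED) KEYLEN(4)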

You can name the data queue whatever name you want and put it in any library as long as the ADDEXITPGM and CRTDTAQ commands refer to the same names.
You can add the exit point entry and watch the entries on the data queue to test if it's working. You don't have to process the data queue entries until you are ready to start putting records into your logging database or whatever you want to do with the information.
If entries build up on your data queue, you can clear the entries at any time by running this command:

CALL PGM(QCLRDTAQ)
PARM(MYDATAQ MYLIB)

That will set the data queue to a fresh start with no entries. You might use that command while testing the program that will receive the data queue entries.
You can dump the data queue at any time with this command:

DMPOBJ OBJ(MYLIB/MYDATAQ)
OBJTYPE(*DTAQ)

The spooled output lets you see entries on the queue so you'll know what your program will be receiving.
Tom

Thanks a lot, Tom. You are really helpful in solving the problem; I really appreciate it. But now the only problem seems to be that we cannot start and stop the QBATCH subsystem, as it is used by people across the globe in different time zones, so jobs are running in QBATCH at all times. In your previous post you mentioned routing programs. It would be really helpful if you could explain that concept, as I don't have any idea about it. Thanks, Deepak

The QBATCH subsystem can be ended and restarted in 5-10 seconds, hardly enough time for anyone to notice or even care. It shouldn't cause a problem for anyone, especially if you have a system management task to be done.
A routing program for your situation can be simple. It would be a CLP and not CLLE. The absolute minimum routing program that you could test would consist of a single command:

TFRCTL PGM(QSYS/QCMD)

The TFRCTL command is only valid in an OPM CL program. A more useful program could begin by looking something like this:
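One possible sketch, with MYLIB/LOGQBATCH as the example message queue discussed below (anything similar would work):

PGM
  DCL VAR(&JOB)  TYPE(*CHAR) LEN(10)
  DCL VAR(&USER) TYPE(*CHAR) LEN(10)
  DCL VAR(&NBR)  TYPE(*CHAR) LEN(6)
  /* Retrieve attributes of the job this routing program runs under */
  RTVJOBA JOB(&JOB) USER(&USER) NBR(&NBR)
  SNDMSG MSG('Job' *BCAT &NBR *TCAT '/' *CAT &USER *TCAT '/' *CAT &JOB) +
         TOMSGQ(MYLIB/LOGQBATCH)
  MONMSG MSGID(CPF0000)   /* never let logging problems stop the job */
  /* Hand the job over to the normal command processor */
  TFRCTL PGM(QSYS/QCMD)
ENDPGM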

The subsystem monitor will call the routing program that is associated with the routing entry for a submitted job. You can display subsystem descriptions to see existing routing entries, and you can look at job descriptions to see what different routing data entries might be used. (Most job descriptions will probably use 'QCMDI', but you can use any values you want. Different routing entries can react to different routing data. That lets you set resources to fit the kind of jobs that use the matching routing data.)
For QBATCH, there are usually four routing entries. Three of them are for special jobs. (One is for System/38 environment jobs.) The last one is a "*ANY" routing entry that catches any job that isn't selected by the earlier routing entries. Essentially every job in QBATCH will be handled by the last routing entry unless you have customized QBATCH already.
My example routing program only retrieves a few job attributes and puts them into a message on a message queue. That is for testing. It lets you know that the routing program ran and it tracked the job.
After it sends the message, it ends itself by transferring control to the system command processor, QSYS/QCMD. If you look at the current routing entries for QBATCH, you should see that QSYS/QCMD is the usual routing program.
Your routing program can do any logging it wants, and it can remove itself from the call stack by transferring control to QCMD. As long as your program doesn't remove the request message that was submitted for the job, QCMD should start up and run the job as it always does. Your program will be out of the way.
My test program sends a message to a message queue named MYLIB/LOGQBATCH. You can replace the SNDMSG command with anything you want to do your logging. You might replace it with a CALL to the QSNDDTAQ API to put the job attributes on a data queue that you create. Then, like the exit point solution, you can have a job running that receives those entries and adds records to your logging database.
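For example, the SNDMSG could be replaced with something like this (the queue name is only an example, and it assumes a plain non-keyed queue created with MAXLEN(26) to hold name, user, and number):

  DCL VAR(&ENTLEN) TYPE(*DEC)  LEN(5 0) VALUE(26)
  DCL VAR(&ENTRY)  TYPE(*CHAR) LEN(26)
  CHGVAR VAR(&ENTRY) VALUE(&JOB *CAT &USER *CAT &NBR)
  CALL PGM(QSNDDTAQ) PARM('LOGDTAQ   ' 'MYLIB     ' &ENTLEN &ENTRY)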
You want the routing program to be as short as possible. You don't want it to access many objects and you want it to finish quickly. If it only sends a data queue entry, it will finish immediately. It won't need a lot of authorities and there won't be many chances of it running into problems. All of the heavy work can be done by the data queue monitor program. The format of the data queue entry can be anything that you decide is useful.
When the routing program runs, it runs under the job that started. That means it runs under the user of the job in QBATCH. That means you'll have to watch the authorities needed by the routing program. That's the big reason to keep the program to the minimum. Don't have it accessing a lot of external objects. If necessary, you can compile it with USRPRF(*OWNER) and have it adopt owner authority. Make sure that *PUBLIC has *USE rights to the routing program.
To set up testing, you could do something like this:
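The commands might look like this (library, queue, and program names are only examples, and the source file location is whatever you use):

CRTLIB LIB(MYLIB)
CRTMSGQ MSGQ(MYLIB/LOGQBATCH)
CRTCLPGM PGM(MYLIB/MYRTGPGM) SRCFILE(MYLIB/QCLSRC) SRCMBR(MYRTGPGM)
CHGRTGE SBSD(QBATCH) SEQNBR(9999) PGM(MYLIB/MYRTGPGM)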

That creates the test library, creates a test message queue, compiles the program, and finally sets the highest routing sequence number for QBATCH to call this program instead of QCMD. As usual, you can use any names you want as long as all commands use the same names.
Also, it would still be better to create a test subsystem description, and run your tests through that subsystem until you are comfortable with how it all works together.
Tom

Tom, I have one more query, and I'm sure you can help me out. It's not related to this thread, but I'm putting it up here; let me know if you would rather I post it under a separate question/heading. Question: I have to copy a spool file to a PC drive in text format. I searched hard for how to map the IFS drive but could not find the solution. I have to copy the spool file, change the name of the spool file, create a folder named with the current date, and then access that folder from the PC. Thanks, Deepak

I would probably first use iSeries Navigator to get the spooled file. Drill down to your printer output, then drag & drop the spooled file to your PC. You can only map a drive to your server if your server is exporting a shared directory for you to map over, and it can only export a share if you have properly configured NetServer and you have NetServer running. (A drive doesn't have to be "mapped". You can just use the UNC for the shared directory.) You'll need to research NetServer before trying to learn details. -- Tom
