S0C1's and S0C4's are hard to check for in a program;
these kinds of things are found in desk-checking and run-time debugging.
That, plus experience, which allows one to avoid them by proper coding in the first place.

but:
S0C7's - there is never an excuse to have one of these at runtime.

The only data whose datatypes you can count on comes from DB2.
Any other source of data -
file, MQS, CICS input screens, ... - should be validated by code before use,
i.e., signed numbers are signed numbers, unsigned numbers are unsigned numbers.
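As a minimal sketch of that kind of validation (field and paragraph names are invented): test the class of every external numeric before arithmetic, since a NUMERIC class test never abends, while a COMPUTE or ADD on junk data can S0C7.

```cobol
       01  IN-REC.
           05  IN-AMOUNT       PIC S9(7)V99.        *> zoned, signed
       01  WS-TOTAL            PIC S9(9)V99 COMP-3 VALUE ZERO.
      * ...
      *    Guard external data before it reaches any arithmetic.
           IF IN-AMOUNT IS NUMERIC
               ADD IN-AMOUNT TO WS-TOTAL
           ELSE
               PERFORM 9100-REPORT-BAD-AMOUNT
           END-IF
```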

Yes Dick and Craig, you are both right, and most of the time we do follow this and do the testing correctly. But my thought in asking this question was: if, like the variable RETURN-CODE, some other special variable were present, then all our error handling, and the actions taken accordingly, could be done at the program level itself.

Rohit,
Not really sure what you are asking, but I will give an answer anyway.

Batch COBOL does not have an equivalent of the CICS HANDLE command.

Once your program falls into a S0C?,
the op-sys has control
and you never see the light of day again (your code is no longer executed).

I normally have a 'flag', PROCESS-FLAG,
with 88 levels of STOP-PROCESS and CONT-PROCESS.

During any interrogation (IF statement)
where I decide that there is an unrecoverable error
(e.g. non-numerics in a numeric field, and I have to stop processing
because the input should all be valid or rejected),
I SET STOP-PROCESS TO TRUE,
perform some kind of error display routine,
and exit the section.

All my PERFORMs are predicated on IF CONT-PROCESS.

That way I can come to an orderly end of program
without a GO TO out of a section/perform.
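As a sketch, the flag/88-level pattern described above (all data and paragraph names are invented) might look like:

```cobol
       WORKING-STORAGE SECTION.
       01  PROCESS-FLAG            PIC X VALUE 'C'.
           88  CONT-PROCESS        VALUE 'C'.
           88  STOP-PROCESS        VALUE 'S'.

       PROCEDURE DIVISION.
      *    every PERFORM is predicated on the flag, so once
      *    STOP-PROCESS is set the program drains to an orderly end
           PERFORM 1000-READ-INPUT
           IF CONT-PROCESS
               PERFORM 2000-VALIDATE-INPUT
           END-IF
           IF CONT-PROCESS
               PERFORM 3000-MAIN-PROCESS
           END-IF
           PERFORM 9000-WRAP-UP
           GOBACK.

       2000-VALIDATE-INPUT.
           IF IN-AMOUNT IS NOT NUMERIC
      *        unrecoverable: flag it, report it, and fall out of
      *        the paragraph - no GO TO required
               SET STOP-PROCESS TO TRUE
               PERFORM 9100-DISPLAY-ERROR
           END-IF
           .
```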

You have to write the code to
perform validation
perform error reporting
perform error action
and
there is no SPECIAL REGISTER for this;
you have to maintain your own 'flags'.
This is what I believe Mr. Giegerich is referring to as Defensive Programming.

whereas a S0C7 can be prevented by 'defensive code':
perform some valid error routine to skip that record
and continue further processing.

If you encounter a S0C7 and decide to just massage the data
instead of adding the necessary 'defensive code',
you can save a compile/relink
until the next file with invalid data comes
(and you are not ON-CALL),
but you never have the audit trail of an exception/error report.

Be careful what you wish for. In my experience problem resolution is more difficult when the application does not abend with a dump.

Your hypothetical switch that would enable the application to trap and handle xC7 abends would make it very difficult to figure out the root cause of the problem.

An abend is a programmer's friend.

If the application does not abend, you are utterly dependent on whatever the application chooses to log or display as part of its error handling process. If the log is well designed and the programmer uses it properly you should be fine. That's a big IF. My experience with applications that are designed to "never abend, never surrender" (I am speaking of user abends now, not system abends) is that most programmers log only a minimal amount of information when a serious error occurs.

Quote:

whereas a S0C7 can be prevented by 'defensive code':
perform some valid error routine to skip that record
and continue further processing.

If you encounter a S0C7 and decide to just massage the data
instead of adding the necessary 'defensive code',
you can save a compile/relink
until the next file with invalid data comes
(and you are not ON-CALL),
but you never have the audit trail of an exception/error report.

Yes, I am perfectly fine with this statement.
But many times we get the input file from another system, and though our program is perfect for dealing with a S0C7, if the input file has some bad/junk data then the job fails, and one has to remove the bad record and restart the job. To replace this activity, I thought of having the above checking in the application program itself, so that after the end of the job we can check the error file and then do our analysis of that particular bad/junk data (if any).

you code to check the validity of datatypes and business allowed values.
checking for datatypes allows you to avoid S0C7's
and either continue processing with an exception report
or bring your program to an orderly end.
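Sketched as code (names are invented, and it assumes an exception file EXCP-FILE has been defined and opened elsewhere), the datatype and business-value checks with an exception report might look like:

```cobol
       2100-VALIDATE-RECORD.
           SET RECORD-OK TO TRUE
      *    the datatype check avoids the S0C7; the business check
      *    enforces allowed values
           IF IN-QTY IS NOT NUMERIC
            OR NOT (IN-REC-TYPE = 'A' OR 'B')
               SET RECORD-BAD TO TRUE
               ADD 1 TO WS-EXCEPTION-COUNT
               MOVE IN-REC TO EXCP-REC
               WRITE EXCP-REC
           END-IF
           .
```

The caller then either skips a RECORD-BAD record and continues, or sets its stop-flag for an orderly end, depending on the business rule.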

Allowing a program to abend with a S0C7 is a waste of resources,
and the error processing the op-sys executes to handle the S0C7
is resource intensive.
You can bring a system to its knees if half the batch processing is abending.

As far as avoiding abending via a S0C4,
those can be avoided through use of
IF pointer = NULL
or
IF ADDRESS OF linkage-item = NULL.

Normally, if prog A is reading a file and doing some processing, and meanwhile a bad/junk record comes, then it may abend with a S0C7. So in this case the entire JOB executing prog A is in abend status, and it may delay or break your SLAs for the batch cycle.

So I would like the turnaround to be: if I get a S0C7 because of any particular record, then progA should not abend but should carry on processing the further records. The bad/junk records can be written to the error file, and eventually, when the job completes, we can correct the data and process those records later.

Also, S0C7 is just an example; if any such system errors could be treated in the above fashion, it would be very flexible for the programmer writing the CODE.

As has been mentioned in this thread -- numerous times -- the only reason for this to happen is bad / sloppy coding. As a system programmer, I have repeated to the application programmers many times that S0C7 abends are completely under their control. If they code their program to expect and handle invalid numeric data, then they will not get S0C7 abends -- EVER! Other than deliberate S0C7 abends for testing software upgrades, I haven't had one in my COBOL programs for many, many years.

Other abends, such as x13, x37, 001, 0C1, 0C4 system abends, are not under the programmer's control and frequently occur despite whatever the programmer does since they can be caused by factors external to the program.

Quote:

Normally, if prog A is reading a file and doing some processing, and meanwhile a bad/junk record comes, then it may abend with a S0C7.

Should be grounds for termination.

As you mention - the file came from "outside" - meaning the contents are unpredictable.

One way this has been handled many places is to pre-process the file and make sure every field is valid (might not be correct, but it is no great effort to validate the contents). When there are any errors, someone needs to decide if the process continues without these records, these records are placed in a "suspense" file, or the invalid data changed that might be incorrect but would not abend.

It would be a worthwhile endeavor to "scrub" input files from an outside source and keep a record of the anomalies found, then report these to management and let them take action.

Your "scrubber" program should be compiled with options TRUNC(BIN), NOOPT and NUMPROC(NOPFD).

Be careful with COMP-3 fields during mapping.

For example, if the mapped field is packed-decimal signed and the actual data contains an 'F' sign-nibble, it will fail a NUMERIC test. So, to ensure valid packed-decimal numerics, redefine the mapped field as packed-decimal unsigned and issue a second NUMERIC check. If both fail, then you've got bad data. If the NUMERIC test fails for packed-decimal signed but passes for packed-decimal unsigned (the second check), then move the field to the packed-decimal signed field and the compiler will ensure a 'C' sign-nibble is forced.
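That two-step check might be coded like this (names invented; the MOVE on the unsigned path is what forces the preferred 'C' sign-nibble):

```cobol
       01  WS-MAPPED-AMT           PIC S9(7)V99 COMP-3.
       01  WS-MAPPED-AMT-U  REDEFINES WS-MAPPED-AMT
                                   PIC  9(7)V99 COMP-3.
       01  WS-CLEAN-AMT            PIC S9(7)V99 COMP-3.
      * ...
           EVALUATE TRUE
               WHEN WS-MAPPED-AMT IS NUMERIC
      *            sign nibble is already C or D
                   MOVE WS-MAPPED-AMT TO WS-CLEAN-AMT
               WHEN WS-MAPPED-AMT-U IS NUMERIC
      *            'F' sign nibble; the MOVE forces a 'C'
                   MOVE WS-MAPPED-AMT-U TO WS-CLEAN-AMT
               WHEN OTHER
                   PERFORM 9200-REPORT-BAD-PACKED
           END-EVALUATE
```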

Yes Bill, Dick: you both made a good point here about removing such bad/junk data, but I was wondering whether something ready-made - a system variable like RETURN-CODE - could be made to exist; it would make our task much simpler, with no need for the above additional logic for any 'S*' errors.

Quote:

It would be a worthwhile endeavor to "scrub" input files from an outside source and keep a record of the anomalies found, then report these to management and let them take action.

Very, very difficult to implement this, as every process is so dependent on the others and has been running fine for the past couple of years.

If everything has been running fine and now, all of a sudden, things are not, you need to research this and find out which "Feed" is causing the problem.

You could be sitting on a real mess, because if the integrity of the data is not guaranteed, the likelihood of this same data-originator maintaining good data on existing feeds will put into question the integrity of existing and all new feeds.

Something has happened somewhere along the line, but validating data will add much overhead to your current processing program. The originator's data needs to be validated first or guaranteed.

Quote:

Very, very difficult to implement this, as every process is so dependent on the others and has been running fine for the past couple of years.

Sorry, but no - it is Not difficult to ensure that each field has valid data (as I mentioned before, it might not be correct, but it would be valid).

There is no valid reason for data on an input file to cause an abend. If this happens, it is only poor programming. Probably twice - whoever wrote the code to create the external file and whoever did not ensure this file would process correctly (i.e. no abends).

Quote:

It would be a worthwhile endeavor to "scrub" input files from an outside source and keep a record of the anomalies found, then report these to management and let them take action.

Your "scrubber" program should be compiled with options TRUNC(BIN), NOOPT and NUMPROC(NOPFD).

Be careful with COMP-3 fields during mapping.

For example, if the mapped field is packed-decimal signed and the actual data contains an 'F' sign-nibble, it will fail a NUMERIC test. So, to ensure valid packed-decimal numerics, redefine the mapped field as packed-decimal unsigned and issue a second NUMERIC check. If both fail, then you've got bad data. If the NUMERIC test fails for packed-decimal signed but passes for packed-decimal unsigned (the second check), then move the field to the packed-decimal signed field and the compiler will ensure a 'C' sign-nibble is forced.

I'd go for TRUNC and NUMPROC as used in the rest of the system.

I don't see a need for NOOPT - get it over quickly.

If using NUMPROC(NOPFD), you have to make a decision about whether a sign which does not conform to the PICTURE is to be treated as "valid". Then, if you discover something isn't "numeric", that is how it is operating in all your programs, and it comes down to the installation setting NUMCLS. DON'T CHANGE IT WITHOUT A LOT OF THOUGHT, ANALYSIS AND TESTING.