The above should not change the output you get, but there is no reason to rebuild the first portion of your record with its original content. You are "overlaying", so leave the first part of the record alone, put your new zero, then pick up the end of the record - don't specify a length, and DFSORT will do "from 29/26 to the end of the record" for you.
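As a sketch only (the positions are illustrative - adjust 26 to wherever your amount actually starts), that would look something like:

```
* Put the new zero at position 26, then copy from position 26 of the
* original record to the end - no length specified, so DFSORT takes
* everything up to the end of the record.
  INREC OVERLAY=(26:C'0',27:26)
```

If I recall the OVERLAY semantics correctly, the input positions refer to the original record, so 27:26 picks up the old data from 26 onwards, not the zero just placed there.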

You might find a simpler solution with PARSE if your records are tab-delimited.

Yes, I was doing PARSE only in the OUTREC. But the thing is, my input has amount fields: whenever zero comes as an amount value I get it as 0.00, but if it is greater than zero it comes as "xxxxxx.00". To make the PARSE condition consistent I am doing this INREC OVERLAY.

In the first row the value after 0001 came as 0.00, but in the second row the value came as "25,000.00". Here I want to make my PARSE condition consistent across all the records, since I am doing the PARSE in the OUTREC.

At a guess, then, you are getting "25,000.00" because your file is being created as a genuine "comma-separated" file. The double-quotes "protect" the "," in the number.

This would mean that it is not only zero which will give you a problem, but anything less than 1,000 will, as that will not contain a comma either.

"Something" has then "converted" the comma-separators to tabs and not bothered to remove the double-quotes which no longer have a function.

The "neatest" solution would be to get the creation of the file as genuinely tab-delimited. Tab delimiters are often chosen because the tab character, unlike the comma, is unlikely to genuinely appear in plain text data.

Next "neatest" is to change the definition of the output numeric fields so that they do not include the commas.

Next "neatest" would be to change the "fixer-upper-to-tabs" so that it removes the double-quotes.
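If that conversion step can't be touched, the sort step itself could drop the quotes instead; a minimal sketch with FINDREP, assuming the double-quotes serve no other purpose in the data:

```
* Delete every double-quote in the record; a null OUT string
* removes the matched string rather than replacing it.
  INREC FINDREP=(IN=C'"',OUT=C'')
```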

Next "neatest" is to use PARSE and deal with the number formats yourself.
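If you do go that route, PARSE plus one of the "free form" numeric formats can absorb both the quotes and the comma in one go. A sketch, assuming tab-delimited input (X'05' is the EBCDIC tab character) and purely illustrative field lengths:

```
* %01 = the key, %02 = the amount complete with quotes and comma.
* UFF keeps only the digits, so "25,000.00" and 0.00 both convert
* cleanly (to 2500000 and 0 - i.e. values in hundredths).
  INREC PARSE=(%01=(ENDBEFR=X'05',FIXLEN=8),
               %02=(ENDBEFR=X'05',FIXLEN=12)),
        BUILD=(%01,X,%02,UFF,TO=ZD,LENGTH=9)
```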

On the other hand, if you think about it, if that tab-delimited file was going back into a spreadsheet program, there is no problem with the data content one moment being 10.00 and the next "25,000.00". Are you sure that the data actually causes you a problem, or are you only expecting that?