I had a requirement to find specific sets of strings and extract the matching records, which I did using a simple SORT card.

Now the scope has expanded to include old datasets: find and extract the matching records for the given strings. I can give all 15 or 20 old datasets in SORTIN and execute the SORT to extract the matching records, but I need to know which file each record was extracted from. Is there any way I can write the properties of the dataset to the spool, or to some dataset, when a particular string is found in that file?

For 15-20 files it wouldn't take more than a couple of hours to prepare your report if you do them one by one, as Nic also advised.
However, if there are many more datasets, you would need programs in place to create the control cards dynamically and then execute them, which means spending more time than doing them one at a time.
In short: write a COBOL program that creates the job, and use INTRDR to submit the JCL.
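As a rough sketch of the INTRDR part, the generated JCL can simply be copied to a SYSOUT DD routed to the internal reader, which submits it (the dataset name below is hypothetical; your COBOL program would write the job there first):

```jcl
//* Copy the generated job to the internal reader, which submits it
//SUBMIT   EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//* SYSUT1 holds the JCL built by the COBOL program (name is an example)
//SYSUT1   DD DSN=MY.GENERATED.JCL,DISP=SHR
//* SYSUT2 routed to the internal reader
//SYSUT2   DD SYSOUT=(A,INTRDR)
```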

Why not one job with 15-20 steps? It would take less than 10 minutes to set up. The ISPF editor is good for this sort of thing: create the job card (copy it from another job), create step 1, repeat step 1 n times, then edit the dataset names in steps 2 through n. Submit. Done.
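A rough sketch of one such step, assuming DFSORT and hypothetical dataset/string names. Because each step reads exactly one input and writes its matches to its own output, the output dataset itself tells you which file a record came from, which answers the original question without any extra reporting:

```jcl
//STEP01   EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=MY.OLD.FILE01,DISP=SHR
//SORTOUT  DD DSN=MY.MATCHES.FILE01,
//            DISP=(NEW,CATLG,DELETE),
//            SPACE=(CYL,(5,5),RLSE)
//SYSIN    DD *
* Copy records containing the search string anywhere in positions 1-80
  SORT FIELDS=COPY
  INCLUDE COND=(1,80,SS,EQ,C'MYSTRING')
/*
//* STEP02..STEPnn are identical except for the SORTIN/SORTOUT names
```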

@RahulG31 - I have other rules which will not be possible with the ISRSUPC utility, hence I felt SORT would be the better option.

@Nic - the file count of 15 to 20 I gave was a rough estimate; basically I need to run it against a GDG base, which may contain more files than that. Anyhow, I planned to do it in batch, say, creating one job for every 20 datasets.


Do you not know how many generations you retain? We roll off after 30 generations. Alternatively, you can add one step which copies each day's data to a new GDG; then you don't even have to worry about all this procedure, as all you have to do is look at that new generation.
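That extra copy step could look roughly like this, assuming a consolidated GDG base has already been defined (all names here are examples):

```jcl
//* Append today's data as the next generation of a consolidated GDG
//COPYGDG  EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=MY.DAILY.DATA,DISP=SHR
//* (+1) catalogs a new generation; LIKE copies the input's attributes
//SYSUT2   DD DSN=MY.SEARCH.GDG(+1),
//            DISP=(NEW,CATLG,DELETE),
//            SPACE=(CYL,(10,10),RLSE),
//            LIKE=MY.DAILY.DATA
```

The later search job then only needs to read the relevant generations of MY.SEARCH.GDG rather than chasing the original datasets.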