I've been running some tests, trying to use one update program to help with recompiles and maintenance. In my example, "UPDATEPRO" is the update program; "PROGRAMA" calls "UPDATEPRO" for all I/O. FILEA and FILEB are physical files.

"PROGRAMA" Program info...

WORKING-STORAGE SECTION.

01 WS-FILEA-SAVE. COPY DD-TESTREC OF FILEA.

01 WS-FILEB-SAVE. COPY DD-RECTEST OF FILEB.

01 FILE-STATUS PIC X(2).

PROCEDURE DIVISION.

CALL "UPDATEPRO" USING WS-FILEA-SAVE
WS-FILEB-SAVE
FILE-STATUS.

"UPDATEPRO" Program info...

SELECT FILEA etc...

FILE SECTION.

FD FILEA
01 FS-FILEA. COPY DD-TESTREC OF FILEA.

FD FILEB
01 FS-FILEB. COPY DD-RECTEST OF FILEB.

LINKAGE SECTION.

01 LNK-FILEA. COPY DD-TESTREC OF FILEA.

01 LNK-FILEB. COPY DD-RECTEST OF FILEB.

01 LNK-FILE-STATUS PIC X(2).

PROCEDURE DIVISION USING
LNK-FILEA
LNK-FILEB
LNK-FILE-STATUS.

The READ statements in the procedure...

MOVE LNK-FILEA TO FS-FILEA.

READ FILEA
INVALID KEY
CONTINUE
END-READ.

MOVE FILEA-STATUS TO LNK-FILE-STATUS.

MOVE FS-FILEA TO LNK-FILEA.

The REWRITE statements in the procedure...

MOVE LNK-FILEA TO FS-FILEA.

REWRITE FS-FILEA
INVALID KEY
CONTINUE
END-REWRITE.

MOVE FILEA-STATUS TO LNK-FILE-STATUS.

MOVE FS-FILEA TO LNK-FILEA.

So I added a field to the end of FILEA and just recompiled "UPDATEPRO". I called "PROGRAMA", which calls "UPDATEPRO" for all I/O (READ, REWRITE, etc.), and had no problems.

Then I used DFU on FILEA to put something in the new field. I didn't recompile. I called "PROGRAMA", which called "UPDATEPRO", and when the REWRITE happened the file was still fine. I wasn't sure whether compiling "PROGRAMA" would cause that new field in FILEA to be spaced out or not by the 01-level MOVE in "UPDATEPRO".

So I am thinking that setting things up like this will work to reduce compiles. Even though I added a field to FILEA, it didn't mess up the call between the two programs, or the READ, REWRITE, etc.?
Am I assuming correctly, or can anyone tell me what I might be missing or where I could have problems with this?

Discuss This Question: 6 Replies

The statement...
"I wasn't sure whether compiling "PROGRAMA" would cause that new field in FILEA to be spaced out or not by the 01-level MOVE in "UPDATEPRO"."
...should be: "I wasn't sure whether compiling "UPDATEPRO" would cause that new field in FILEA to be spaced out or not by the 01-level MOVE in "UPDATEPRO"."

...and just recompiled "UPDATEPRO".
Because you only recompiled UPDATEPRO, the length of PROGRAMA.WS-FILEA-SAVE will be shorter than UPDATEPRO.LNK-FILEA.
If the new field is at the end of UPDATEPRO.LNK-FILEA, there's no way to be sure what might be in it when UPDATEPRO starts. Because UPDATEPRO.LNK-FILEA is a LINKAGE item, it is actually referencing memory associated with PROGRAMA. But PROGRAMA doesn't manage the memory where the new field is referenced by UPDATEPRO. The value that is handled by UPDATEPRO will be unpredictable upon entry to UPDATEPRO, and the effect of changing that memory will be unpredictable upon return to PROGRAMA since UPDATEPRO doesn't really know what PROGRAMA thinks is at that address.
Most of that assumes that FILEA is a physical file or a logical file with an implicit field list (or an explicit list that is always updated to include new fields).
If FILEA is a LF with an explicit field list that doesn't include the new field, then the value of the new field should be whatever you defined as its default value when a new record is inserted or will be the current value when existing records are updated.
It's perfectly possible for the new field value to have been preserved, but that cannot be relied upon. It's good luck if it was preserved. It will be bad luck on days where it gets messed up. (Expect the first such day to arrive at the worst possible time.)
In short, if this is something that you actually want to do, make sure that FILEA is a logical file that explicitly lists only the fields from the physical file that you want to handle. When you add a field to the physical file, don't add it to the explicit field list of the LF until you are prepared to recompile programs that reference the LF.
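As a sketch of that approach, the LF's DDS would name its fields explicitly and simply leave the new PF field out. (Field, format, and key names here are illustrative assumptions, not taken from the actual files.)

```dds
     A* Illustrative DDS for a logical file over FILEA.  Because the
     A* field list is explicit, programs compiled over this LF never
     A* see NEWFLD, so adding NEWFLD to the PF doesn't force their
     A* recompile.
     A          R TESTRECL1                 PFILE(FILEA)
     A            CUSTNO
     A            CUSTNAME
     A* NEWFLD deliberately omitted until the programs that use this
     A* LF are ready to be recompiled.
     A          K CUSTNO
```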
You can make a lot of changes to a PF; and by using LFs, you can isolate programs from PF record format changes. You might, for example, have UPDATEPRO do its I/O to the PF but use a LF record format to define LINKAGE items. Use MOVE CORRESPONDING to copy fields from the LINKAGE area to the actual record images. You'd still recompile UPDATEPRO whenever the PF was expanded, but you wouldn't be required to change the LF definition. If PROGRAMA also used the LF definition, the two programs could remain in sync.
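A minimal sketch of that arrangement might look like the following. The LF name FILEAL1 and its format name TESTRECL1 are assumptions, and the SELECT/OPEN logic is omitted:

```cobol
      * UPDATEPRO does its I/O against the PF, but describes the
      * caller's data with the LF record format.
       FILE SECTION.
       FD  FILEA.
       01  FS-FILEA.        COPY DD-TESTREC OF FILEA.

       LINKAGE SECTION.
       01  LNK-FILEA.       COPY DD-TESTRECL1 OF FILEAL1.

      * Update path: READ fills the whole PF record image, then only
      * the fields the caller's LF format carries are copied in by
      * matching field names.  NEWFLD keeps its READ value.
           READ FILEA
               INVALID KEY CONTINUE
           END-READ
           MOVE CORRESPONDING LNK-FILEA TO FS-FILEA
           REWRITE FS-FILEA
               INVALID KEY CONTINUE
           END-REWRITE
```

MOVE CORRESPONDING copies only the elementary items whose names match in both groups, which is what isolates the caller from fields it doesn't know about.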
Tom

Thanks Tom. What you say makes sense; when I was testing, I just had a feeling it wasn't right, which is why I wanted to ask what other people thought.
So even if a logical has one less field explicitly defined than the PF, if I read a record in with the logical and then write it back out with the logical file, it won't put garbage in, or overlay, the field or fields in memory it doesn't know about? I think I would benefit from using this in isolated programs so I wouldn't have to recompile them all the time.
I was trying to have one program that did I/O being called by multiple programs. But even if I use the logical in the LINKAGE SECTION, there is probably going to be one program that needs to use that new field and call the update program. So I would be back to updating that logical and recompiling everything anyway. So it basically matters more how I am using the program when deciding whether logicals will give me any benefit.
Also, I realize using embedded SQL would be a good path for lessening compiles, but I'm just not 100% sure whether putting embedded SQL in programs going against DDS-defined files will cause a performance hit compared to traditional I/O?

...it won’t put garbage or overlay the field or fields in memory it doesn’t know about?
That depends on your definition of "garbage" and on whether the operation is an INSERT or an UPDATE. Any UPDATE should leave the value in the unknown field alone. An INSERT should create a record with the default value for the unknown field. If a default isn't defined, it'll generally be either blanks or zeros.
I realize using embedded SQL would be a good path for lessening compiles...
How so? Are you thinking you could add a field to your file and not need to recompile? As soon as you need to reference the field in that program, you'll still need to recompile to put in the coding change to use the field and to change the SELECT column list.
If that SQL program doesn't need to reference the field, then there are no changes needed and no compile needed. But how is that different from using a LF that doesn't list the field?
Or are you thinking that you'll need to add the field to the LF, and recompile every program that uses the LF because one of the programs needs to use the field even if the rest of the programs don't? Then don't add the field to the LF -- just compile a new LF that has the field and use the new LF in the program that needs the field. No need to touch any other programs and no need to change a LF that's already working fine.
Seriously, how often are new fields added to a given file? Creating a new LF isn't much of an effort and it doesn't affect the system much unless it requires a new access path (new key fields). Most LFs ought to be sharing access paths.
Over some number of years, a collection of LFs might be pruned down. But mostly that shouldn't be much more than pointing a few programs at a new LF and submitting recompiles. Those can be phased in over as long as you want. Since the new LFs will already have been in use for a while, there shouldn't even need to be much testing, if any at all. If old LFs eventually fall out of use, then delete them.
there is probably going to be one program that needed to use that new field and call the update program.
The main I/O program will always need to have access to the physical file and to all of the fields. It needs to know what record format is being used by any program that calls it. If there are ten LF formats, it can redefine the LINKAGE item with those formats and use MOVE CORRESPONDING to or from the appropriate format name.
At least, it can do that if the calling program sends it the name of the format that it needs. That should be just about the only requirement -- one parm to name the format and another parm to pass the data structure for the LF.
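One possible shape for that, with all names hypothetical, is a format-name parm plus one data block that the I/O program redefines once per LF format:

```cobol
      * Sketch only -- LF and format names are made up.  The caller
      * passes the record-format name it was compiled with plus the
      * data block; the I/O program picks the matching definition.
       LINKAGE SECTION.
       01  LNK-FORMAT-NAME          PIC X(10).
       01  LNK-DATA.                COPY DD-TESTRECL1 OF FILEAL1.
       01  LNK-DATA-L2 REDEFINES LNK-DATA.
                                    COPY DD-TESTRECL2 OF FILEAL2.

       PROCEDURE DIVISION USING LNK-FORMAT-NAME LNK-DATA.
      * Extract from the block using whichever definition the
      * caller named, then continue with the PF I/O.
           EVALUATE LNK-FORMAT-NAME
             WHEN 'TESTRECL1'
               MOVE CORRESPONDING LNK-DATA    TO FS-FILEA
             WHEN 'TESTRECL2'
               MOVE CORRESPONDING LNK-DATA-L2 TO FS-FILEA
           END-EVALUATE
```

Only one address is passed for the block either way; the REDEFINES just gives the I/O program several maps over the same storage.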
So, there would be four compiles when a field is added -- the PF and the new LF need to be compiled, the I/O program needs to be recompiled to pick up the PF and to include the new LF, and the program that uses the new field will be compiled.
If you're going to be adding fields, you can't get away from the PF and the I/O program no matter what. Those are constant efforts regardless of how you do everything else.
The effort that you need to look at is in everything else around those items. I can't see a way to minimize the extra effort below one LF and one program. Note that if you need to change more programs for the new field, the added effort is still contained only within the code that actually needs changing.
Adding a field isn't a trivial task. Planning and preparation doesn't eliminate the work. Hopefully, it helps reduce future work.
Also, always stay aware that DB2 is a relational database. Sometimes adding a field isn't the best way to go. Sometimes it's better to add a related table. Maybe a JOIN to a related table is a better approach. Stay aware of how relations may be used instead of changes to table definitions. A JOIN LF can normally be used as easily as any other.
I don't know if I've confused the issue or not. Mostly I'm just writing around the whole problem in case I touch something useful.
Tom

.....An INSERT should create a record with the default value for the unknown field. If a default isn’t defined, it’ll generally be either blanks or zeros.
I was more concerned about overlaying something there on a REWRITE, since the program didn't know about the field. On an INSERT I would expect zeros or blanks; by "garbage" I meant something weird other than the normal zeros or blanks. What you said makes sense.
..... Are you thinking you could add a field to your file and not need to recompile? As soon as you need to reference the field in that program, you’ll still need to recompile to put in the coding change to use the field and to change the SELECT column list.
No I do understand no matter what I have to recompile the programs that need to use the field it is the other programs that don't need it I am thinking about.
....If that SQL program doesn’t need to reference the field, then there are no changes needed and no compile needed. But how is that different from using a LF that doesn’t list the field?
Valid point, a program using SQL or Logical with explicit fields would cause the same amount of compiles.
....The main I/O program will always need to have access to the physical file and to all of the fields. It needs to know what record format is being used by any program that calls it.
Yes I agree.......
.....It needs to know what record format is being used by any program that calls it. If there are ten LF formats, it can redefine the LINKAGE item with those formats and use MOVE CORRESPONDING to or from the appropriate format name.
At least, it can do that if the calling program sends it the name of the format that it needs. That should be just about the only requirement — one parm to name the format and another parm to pass the data structure for the LF.
Okay, interesting. So I add the new logical format to the end of the LINKAGE of the I/O program, and to the CALL of the programs that need it, with a parm to say which format is needed.
The programs that still use the old logical format in the CALL, and the I/O program's old logical reference, would still be fine because the passing between those programs references the same space in memory? The main thing is keeping the logical format the calling program sends to the I/O program the same as the one sent back?
....Adding a field isn’t a trivial task. Planning and preparation doesn’t eliminate the work. Hopefully, it helps reduce future work.
Agreed. It seems as if everyone has done their own thing; I'm just trying to get more consistency.
Thanks you have provided some good food for thought. :)

I was more concerned of Overlaying something there on a Rewrite...
A "rewrite" would be a COBOL-specific form of a more generic "update" operation. A "write" would be the COBOL equivalent of "insert".
Overlaying would happen if the COBOL statements caused some overlay. E.g., if COBOL defined the PF record format and redefined the same area with a LF format that had a different list of fields, and then the COBOL moved values into the LF field definitions, and then COBOL wrote (or rewrote) the PF record, then, yes, overlay would be a concern.
But also note that if COBOL WRITEs (INSERTs) a PF record format without moving data into each field, any unreferenced fields will contain whatever was in the COBOL memory at those addresses. If COBOL instead opened the LF and wrote the LF format, then DB2 has the responsibility of supplying default values. When COBOL hands a buffer over to DB2, DB2 doesn't know if COBOL put correct values in each position of the buffer.
If the buffer belongs to the PF, then DB2 interprets the memory according to the PF format. DB2 expects each field to be there. If the buffer belongs to a LF, then DB2 knows about fields that are not represented in the buffer.
That ties back to the thought of defining LINKAGE items with LF formats and using MOVE CORR into a PF format. The PF format should be initialized for defaults first if a WRITE is being done, but everything should automatically be initialized properly by a READ before any REWRITE.
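In COBOL terms, the add path of that idea might look like this (hypothetical names again, reusing the FS-FILEA/LNK-FILEA style from earlier in the thread):

```cobol
      * On an add (WRITE/INSERT): clear the full PF record image
      * first, so fields the caller's LF format doesn't carry end up
      * as blanks/zeros instead of leftover storage.
           INITIALIZE FS-FILEA
           MOVE CORRESPONDING LNK-FILEA TO FS-FILEA
           WRITE FS-FILEA
               INVALID KEY CONTINUE
           END-WRITE
      * On an update, the preceding READ already populated every
      * field of FS-FILEA, so no INITIALIZE is needed before the
      * MOVE CORRESPONDING and REWRITE.
```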
The difference between updating a PF (a TABLE) and updating a LF (a VIEW), that is, which definition is OPENed, is an area you'll want to stay aware of.
The programs that still use the old logical format in the CALL, and the I/O program's old logical reference, would still be fine because the passing between those programs references the same space in memory? The main thing is keeping the logical format the calling program sends to the I/O program the same as the one sent back?
If an old program used an old LF structure and the I/O program used the same old structure when called by that program, it really shouldn't matter at all.
All of the programs will always use the same space in memory (from the I/O program's viewpoint). The I/O program only ever sees two things -- the first thing is an indication of which format is used and the second thing is the field containing the block of data. The I/O program only needs to know which definition to use when it extracts from the block of data. All of the LF formats will redefine the second item in the I/O program.
The I/O program only ever sees two things --
Naturally, that probably isn't exactly true. For example, there will probably be another item that defines an operation code. Maybe it'll be FETCH or UPDATE or DELETE or whatever. The I/O program needs to know why it's being called. And there may be another item -- return code. The I/O program probably ought to tell the calling program what happened -- success or failure.
You might have separate linkage items for each thing that you want to communicate, or you might pass a "communication" data structure that leaves extra space at the end for potential expansion. It doesn't matter much if you reserve space at the end of a structure since just the address is passed anyway.
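A hypothetical layout for such a communication structure, with spare room at the end (names and sizes are illustrative only):

```cobol
      * Sketch only -- one structure carries everything the I/O
      * program needs to know besides the record data itself.
       01  LNK-COMM-AREA.
           05  LNK-OPCODE           PIC X(10).
      *        e.g. 'FETCH', 'UPDATE', 'DELETE', 'WRITE'
           05  LNK-FORMAT-NAME      PIC X(10).
           05  LNK-RETURN-CODE      PIC X(2).
           05  FILLER               PIC X(50).
      *        reserved for future expansion; costs nothing extra
      *        since only the address is passed on the CALL
```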
Tom
