We don't have any DBA support at our site; at the site we use, the DBAs are usually the ones posting here.
So instead of asking them and waiting for them to get an answer from the board, I figured I would just man up, admit that I don't know everything, and ask the question:

A COBOL program SELECTs everything from a table and changes names, streets, etc. with INSPECT CONVERTING;
the output of the COBOL program is used in a LOAD REUSE (or REPLACE??).
Until now, we have not modified the lengths of the VARCHAR columns.

My question: if I change the length of a VARCHAR column, what effect does it have on the DB2 table?
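For context on what the ALTER side of this looks like (table and column names below are invented, and behavior varies by DB2 version, so treat this as a hedged sketch): on DB2 for z/OS, increasing a VARCHAR length is a simple in-place ALTER, while shrinking one generally cannot be done with ALTER at all.

```sql
-- Hypothetical names, for illustration only.
-- Widening a VARCHAR is allowed in place on DB2 for z/OS:
ALTER TABLE MYSCHEMA.CUSTOMER
  ALTER COLUMN STREET SET DATA TYPE VARCHAR(80);

-- Shrinking (say VARCHAR(80) -> VARCHAR(40)) is generally NOT
-- allowed via ALTER; that usually means drop/create, or an
-- unload/reload into a redefined table.
```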

Like an idiot, I assumed (that's the idiot part) that the table was dropped, created, then loaded, but it is not.

So I am assuming (being an idiot again) that the REUSE/REPLACE parm tells DB2 to replace the old rows with new rows.
Since each row is the same (as far as VARCHAR lengths are concerned), I did not foresee a problem.

Some of the VARCHAR columns are index columns, and we have been modifying the data values, so index entries are being deleted/inserted anyway.

Currently we run with LOAD REUSE or REPLACE (I have to find out which).
I assume we are not doing RUNSTATS or anything else; just load and go.
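For anyone else reading, the parms in question distinguish three things: RESUME YES appends to existing data, REPLACE empties the table space first, and REUSE (with REPLACE) logically resets the existing VSAM data sets instead of deleting and redefining them. A hedged sketch of the control statement (data set and table names are made up):

```
//SYSIN    DD *
  LOAD DATA INDDN SYSREC LOG NO REPLACE REUSE
    INTO TABLE MYSCHEMA.CUSTOMER
/*
```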

But if we drop/create/load, that process might be faster;
would we then be required to do RUNSTATS, etc., before going?

But we may have to change the VARCHAR lengths (mostly smaller, but some larger), and I am worried about the effect of VARCHAR length changes on the load process and the base DB2 table.
As I said, we have been eating the cost of the index changes anyway.

We dump/change/load 6 tables, totaling about 60 million rows, in half an hour (30 minutes wall clock).

I do not want this to become a 2 hour process (or even a 45 minute process).

So educate me (and others, by the way) on the effects of VARCHAR length changes on a LOAD REPLACE or REUSE.
Would it take longer to drop, create, and load?

I think loading a table with a lot of VARCHAR columns would take more time than loading a table with CHAR columns, because the LOAD utility has to read the length field of the column in the input dataset for each row before it can load the data. Once it has read the length field, DB2 knows how much data to read in the row for that column, so I think it doesn't matter whether it is a big or small VARCHAR column. Please correct me if I am wrong.
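The length field mentioned above is a two-byte binary prefix on each VARCHAR field in the input record, and it shows up implicitly in the LOAD field specifications. A sketch with invented names and positions:

```
  INTO TABLE MYSCHEMA.CUSTOMER
    (CUSTNO  POSITION(1)   CHAR(10),
     STREET  POSITION(11)  VARCHAR)
```

For the VARCHAR field, the utility reads the two-byte length at POSITION(11) first and then that many bytes of data, so the per-row parsing cost depends on how many VARCHAR columns there are rather than on their maximum lengths.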

A drop/create/load process may take some time because dropping a table triggers dropping all of its dependent objects (indexes, views, synonyms) and invalidates all the dependent packages. LOAD REPLACE with REUSE, on the other hand, gives you a small performance advantage by not deleting and redefining the underlying DB2 data sets, and statistics can be gathered during the load itself.
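The inline-statistics point can be seen in the control statement itself; with a STATISTICS clause, a separate RUNSTATS step is unnecessary (again, names here are placeholders):

```
  LOAD DATA INDDN SYSREC LOG NO REPLACE REUSE
    STATISTICS TABLE(ALL) INDEX(ALL)
    INTO TABLE MYSCHEMA.CUSTOMER
```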

Can you tell us the lengths of the VARCHAR columns, and whether the table is partitioned or non-partitioned?

When we are dealing with tables with more than 5,000,000 large rows,
we use the BMC utilities. For less than that, we use DSNUTILB.

My basic question was (due to total ignorance):
when I modify the length of VARCHAR columns, will I impact the load?
In the case of a data replace, it does not matter; each row is a new row in a clean table.
The fact that the table was emptied of all rows (the tablespace's records),
based on my LOAD parms, made this a no-brainer.

I will have to experiment with other parms. Honestly, I do not know what the REUSE parm does; I have not used it.

Hi,
First, you can issue an UNLOAD from the table using //SYSREC0 DD DUMMY, just so you get the //SYSPUNCH file, which contains the exact description of the table. Then use this SYSPUNCH file as input to the LOAD (one of the LOAD parameters) to supply the table's description and reduce overhead.
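Assuming the unload is done with the DSNTIAUL sample program (the SYSREC0/SYSPUNCH DD names suggest it), the step might look roughly like this; the subsystem, plan, and data set names are placeholders:

```
//UNLOAD   EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSPUNCH DD DSN=MY.LOAD.CARDS,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(1,1))
//SYSREC00 DD DUMMY               DISCARD THE DATA, KEEP THE CARDS
//SYSTSIN  DD *
  DSN SYSTEM(DSN1)
  RUN PROGRAM(DSNTIAUL) PLAN(DSNTIAUL) PARMS('SQL')
  END
//SYSIN    DD *
  SELECT * FROM MYSCHEMA.CUSTOMER;
/*
```

With SYSREC00 pointed at DUMMY, the unloaded data is thrown away but SYSPUNCH still receives the generated LOAD control statements describing the table.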
Second, you mentioned you have the BMC utilities. If you can use BMC CATALOG MANAGER, you can either UNLOAD and LOAD tables via interactive panels, or you can create the relevant DB2 commands, save them in a PDS, and run them later via BMC CATALOG MANAGER or SPUFI.
That's the best way I know.
If you want, send me a mail to e-addr removed and I can provide you a LOAD procedure to run under a JOB.
All the best,
Haim Zeevi