Perhaps the reason COBOL is ignored in textbooks is that, once you strip
away the substantial run-time support required, it seems to be a fairly
uninteresting language from an implementation viewpoint.

Also, there is probably a perception that COBOL program performance is
limited by I/O rather than CPU. I know from personal experience that this
is not always true, but certainly it is often true, especially in
transaction processing, which is where performance issues are often most
palpable.

In the case of I/O-bound programs, most traditional optimization
techniques are not useful. Locality-of-reference optimizations on the
database could help, but it is not clear that the compiler is in a
position to perform them.

There is one optimization I am curious whether anyone has ever
implemented in a COBOL compiler. COBOL has many data types for
numeric values. Computation on some of these (e.g. ASCII/DISPLAY
representation, packed BCD) is often slow compared to computation on
binary values, especially since many machines have no support for these
types built into the instruction set.

Has anyone implemented a COBOL compiler that attempts to replace
expensive data representations with cheaper ones? Two methods of doing
this that come to mind are flow analysis and lazy conversion.
--