I am using the C++ OCI library to insert report data from a remote OCI client into an Oracle 11g server. This data is read by another process to create the report.

The DB charset is UTF-8, but the report tool expects the data to be ISO-8859-1 encoded. So while inserting the data into the database, I specify the following LANG and CHARSET for my table column on the client:

You need to understand that when you insert ASCII characters into a UTF-8 database, OCI automatically converts ASCII to UTF-8 on insert and UTF-8 back to ASCII on read. I guess something goes wrong in this automatic conversion.

IMHO the best approach would be to insert UTF-8 characters into the DB, so they go in without any conversion.
And you can read them back in any charset you wish...

One more thing: make sure that you have the beta character in your display character set.

The problem is that the application which reads the data my application is inserting expects the value ONLY in ASCII (single-byte) encoding [the hex value of the inserted data has to be DF for it to decode properly].

But when we try to insert the beta character *[β]* with ASCII encoding so that it goes in as DF, the OCI client library inserts the data as an EMPTY STRING.

But we cannot currently change the implementation of the application that reads the data, as it is 3rd party.

You should realize that the character set information you give in the inserting client must always match the encoding of the data you put into the buffers. The character set of the database is not relevant here (though it is relevant to ensure that data can be stored without resorting to replacement characters; UTF-8 is fine for that). The NLS_LANG encoding is used for the INSERT statement itself and all literals included in it. The OCI_ATTR_CHARSET_ID value is used for the content of the bind buffer. If you set OCI_ATTR_CHARSET_ID to 871 (the UTF8 character set ID), then the data in the buffer must be UTF-8 encoded. If you set NLS_LANG to .WE8ISO8859P1, all text literals in the INSERT statement must be Latin-1 encoded.

Therefore, you must either convert the data to the configured client character sets (not a good idea, in general), or configure the character sets (NLS_LANG and/or OCI_ATTR_CHARSET_ID) to match the input data.

Configuration of the report generator is a separate matter. If the generator expects Latin-1 encoded data, you should make sure that NLS_LANG is set to .WE8ISO8859P1 in its environment -- this assumes that the generator is an OCI client that does not apply any special NLS settings itself. Otherwise, further investigation may be necessary.

Note that Latin-1 non-ASCII characters, such as the German sharp-s, expand from one byte to two bytes each when converted to UTF-8. You must make sure the target column is large enough for the converted value.

If your database is UTF-8, i.e. SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET'; returns AL32UTF8 (or the similar UTF8), then data in VARCHAR2/CHAR/LONG columns must be encoded in UTF-8. This is the whole point of the database character set declaration. What you need to ensure is that this data, when selected by the reporting program, is converted back to Latin-1. This can be achieved by setting NLS_LANG to .WE8ISO8859P1 in the environment of the reporting program. If this does not help, we need to take a closer look at that program: which Oracle client API it uses, in which programming language it is written, on which platform it runs, etc.
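Assuming a Unix-like shell and that the reporting program inherits its environment normally, the setting could look like this (the program name is a placeholder):

```shell
# Ask the reporting program's OCI client to convert database UTF-8 data to
# Latin-1 on SELECT. The leading "." leaves language/territory at defaults.
export NLS_LANG=.WE8ISO8859P1

# Then launch the (hypothetical) reporting program in this environment:
# ./report_generator
```

On Windows, the equivalent would be setting NLS_LANG in the process environment or in the Oracle home's registry key before the program starts.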