Neo X wrote:
I recently migrated my code from the native version of ODP.NET to the managed version, and all of a sudden, accessing the SafeMapping property on OracleDataAdapter gives the following exception:

I've migrated from native ODP.NET, so I have a call to SafeMapping in my code. Yes, I know it's not supported, but I don't want to just remove that line of code without understanding the implications of doing so. That's why I asked whether it's no longer required, or whether I'm doing something wrong. For example, is there now an alternative way of doing what SafeMapping did?

Sorry, you stated "...and all of a sudden, accessing the SafeMapping property on OracleDataAdapter gives the following exception..." then asked "Or, is there something I'm doing wrong?" and I was just pointing out that the SafeMapping property is not (at least currently) implemented in the managed provider.

Yes, I know it's not supported, but I don't want to just remove that line of code without understanding the implications of doing so.

If you have Oracle values that exceed the precision or size of their .NET counterparts, then you could lose data/precision or get an overflow exception when converting to the .NET type.

You could spin up a quick test that selects a valid Oracle value that is larger than its .NET counterpart can hold, for example:
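A quick test along those lines might look like the sketch below (the connection string is a placeholder; the 38-digit literal is a valid Oracle NUMBER but exceeds System.Decimal's roughly 28-29 significant digits, so the default .NET type mapping should fail to convert it):

```csharp
using System;
using System.Data;
using Oracle.ManagedDataAccess.Client; // managed ODP.NET

class SafeMappingOverflowTest
{
    static void Main()
    {
        // Placeholder connection string -- adjust for your environment.
        using (var con = new OracleConnection("User Id=scott;Password=tiger;Data Source=orcl"))
        {
            // 38 significant digits: valid for Oracle NUMBER,
            // too many for System.Decimal (~28-29 digits).
            var adapter = new OracleDataAdapter(
                "SELECT 12345678901234567890123456789012345678 AS big_num FROM dual", con);

            var ds = new DataSet();
            try
            {
                adapter.Fill(ds); // converts to .NET types by default
                Console.WriteLine(ds.Tables[0].Rows[0]["BIG_NUM"]);
            }
            catch (ArithmeticException ex) // OverflowException derives from this
            {
                Console.WriteLine("Conversion failed: " + ex.Message);
            }
        }
    }
}
```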

OK, I understand the purpose of SafeMapping. I'm really trying to understand if it's still required in managed ODP.NET. Since it's not supported, if values come back from the database that aren't compatible with their .NET counterparts, how do you manage this in managed ODP.NET?

Data type mapping is still required whether you use the managed or the unmanaged provider. The fundamental data type differences between the .NET Framework and Oracle Database have not changed; the data provider will always need to deal with them.

One solution is to use ODP.NET data types within your DataSet. You can do this by setting OracleDataAdapter.ReturnProviderSpecificTypes to true. That's probably the easiest way to avoid having to work around errors caused by data size. It will require some code changes, such as commenting out any safe type mapping calls.
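A sketch of that approach (the connection string, table, and column names are made up; the ReturnProviderSpecificTypes line is the point here):

```csharp
using System;
using System.Data;
using Oracle.ManagedDataAccess.Client;
using Oracle.ManagedDataAccess.Types;

class ProviderTypesExample
{
    static void Main()
    {
        using (var con = new OracleConnection("User Id=scott;Password=tiger;Data Source=orcl"))
        {
            var adapter = new OracleDataAdapter("SELECT num_col FROM my_table", con);

            // Fill the DataSet with ODP.NET provider types (OracleDecimal,
            // OracleTimeStamp, ...) instead of the default .NET types, so
            // full Oracle precision is preserved and no overflow occurs.
            adapter.ReturnProviderSpecificTypes = true;

            var ds = new DataSet();
            adapter.Fill(ds);

            // The column now holds OracleDecimal rather than System.Decimal.
            var value = (OracleDecimal)ds.Tables[0].Rows[0]["NUM_COL"];
            Console.WriteLine(value);
        }
    }
}
```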

We hope to support safe type mapping in a future release. It won't be in the first managed provider production release.

Awesome, that works. I've not checked the numbers I get back yet, but the obvious question is: why use SafeMapping if you can just set this property to true? It seems a much easier solution. Is there a trade-off, or are there circumstances where this solution won't work and SafeMapping must be used?

The trade-off depends on whether you need to use a specific type system: ODP.NET types or .NET types. The scalar types don't differ terribly between the two; ODP.NET types provide more precision, which is why you need Safe Type Mapping in the first place. The complex types, such as LOBs, UDTs, and REF CURSORs, have unique performance and capability differences compared to standard .NET types.

You can still use .NET types without Safe Type Mapping. You'll just have to programmatically truncate the precision before filling the DataSet. .NET's default behavior is to throw an error if you try to insert a data value too large for the .NET data type.
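One way to do that truncation (a sketch; the table and column names are hypothetical) is in the SELECT itself: casting to BINARY_DOUBLE makes Oracle, not .NET, absorb the precision loss, and BINARY_DOUBLE maps cleanly to System.Double.

```csharp
// Let Oracle do the narrowing server-side: BINARY_DOUBLE maps to
// System.Double, so Fill() no longer has to convert an out-of-range NUMBER.
var adapter = new OracleDataAdapter(
    "SELECT CAST(num_col AS BINARY_DOUBLE) AS num_col FROM my_table", con);
var ds = new DataSet();
adapter.Fill(ds); // values arrive already within double's range and precision
```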

I see. Well, this is mainly for double or float types in .NET. So, in other words, for me, setting ReturnProviderSpecificTypes to true before calling OracleDataAdapter.Fill() will effectively truncate the precision of field values in my SELECT query before populating the .NET types in my DataSet?

OK, makes sense. Thanks for your help. I was getting an overflow exception on a numeric field when calling Fill() after I removed SafeMapping, but it was fixed when I used ReturnProviderSpecificTypes. Hopefully, the numbers are still the same.