Hard to tell with all that unformatted code, but it looks like you've created a function-based index whose expressions don't match any of the conditions in your UPDATE statement, so of course the CBO will not choose to use it.

The Optimizer is estimating 60M rows will match the 2 main filter conditions. We don't know how many rows in total there are in this table, but 60 million might be a high percentage of that.

The other table is showing as having 145M rows in it. For a join to a 60M-row data set, an index full scan and a hash join make sense from a cost and efficiency perspective; a nested loop would be far more expensive.

What makes you think that index was going to speed things up? For starters, it's a function-based index that you don't reference at all.

How big is this table? Oracle has costed it at 18 million and thinks you're returning 60M rows out of it, which is a cost of about 0.3 per row. Does that add up to what you think it should be?

Is this a statement that you will run once and forget about? Does it really matter how long it takes?

Your hint is being "ignored" because it is not valid for Oracle to use the index.

Try rewriting it as follows (that is not to say that using the index will be faster than the full table scan; you've given no information to suggest it would be):

UPDATE /*+ INDEX(h IDX_REF_UHC_COS_PHYS_CLMS_07) */ ref_uhc_cos_phys_clms h
   SET h.Claim_Status = 'ABC'
 WHERE CASE WHEN CLM_STAT_DT IS NULL THEN CLAIM_STATUS END IN ('XYZ', 'LMO')
   AND NOT EXISTS (SELECT 1
                     FROM REF_UHC_COSMOS_MEMBER M
                    WHERE M.MBR_SYS_ID = CASE WHEN H.CLM_STAT_DT IS NULL THEN H.MBR_SYS_ID END)

Note how I'm using the expressions from the function-based index to search for the data. I'm not quite sure what the last column on the index is for, but if it was created specifically for this plan it can probably be dropped.
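For reference, a function-based index that the rewritten predicates could use would presumably have been created with matching expressions, something like the sketch below. The exact definition is an assumption; only the index name comes from the statement above.

```sql
-- Hypothetical reconstruction of IDX_REF_UHC_COS_PHYS_CLMS_07.
-- The CASE expressions in the query must match the indexed expressions
-- exactly, or the optimizer cannot consider the index.
CREATE INDEX IDX_REF_UHC_COS_PHYS_CLMS_07
  ON ref_uhc_cos_phys_clms (
       CASE WHEN CLM_STAT_DT IS NULL THEN CLAIM_STATUS END,
       CASE WHEN CLM_STAT_DT IS NULL THEN MBR_SYS_ID END
     );
```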

An index on REF_UHC_COSMOS_MEMBER(MBR_SYS_ID) already exists, and an index on ref_uhc_cos_phys_clms(Claim_Status, Clm_Stat_Dt) would be sub-optimal: it would mean (1) a range scan of that index to validate Claim_Status IN ('XYZ','LMO') and Clm_Stat_Dt IS NULL, then a table access by rowid to fetch MBR_SYS_ID, and (2) a probe of the member index to validate the NOT EXISTS. The index I suggested, on ref_uhc_cos_phys_clms(Claim_Status, Clm_Stat_Dt, MBR_SYS_ID), would eliminate the table access by rowid for validating the NOT EXISTS.
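The suggested composite index could be created like this (the index name is an assumption; the column list comes from the suggestion above):

```sql
-- MBR_SYS_ID is added as a trailing column so the NOT EXISTS probe can be
-- answered from the index alone, with no table access by rowid.
CREATE INDEX idx_clms_status_dt_mbr
  ON ref_uhc_cos_phys_clms (Claim_Status, Clm_Stat_Dt, MBR_SYS_ID);
```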

Also, I am yet to create the index you suggested; since we have very limited resources, I need some time. Once the load on the server is low I will create the index. As this process does a FTS, it takes a lot of time to create one.


Thanks for your help!

Please answer my previous questions; those queries against user_tables do not explain the full situation. By repeating to us the stats that the optimizer has, all we can do is come up with the same plan the optimizer did, which definitely suggests a full table scan is the best option. If you tell us about the data distribution under the filters and how big the tables are, we can help you tell the optimizer. The more the optimizer knows, the better your plan will be; the more you tell us, the better ideas we can offer.

How big is the ref_uhc_cos_phys_clms table? (How many rows? How many blocks?)

How many rows do you expect your query to return?

For each of the filters that you are applying to ref_uhc_cos_phys_clms, in each combination, how many rows do you expect to get back? (e.g. there are x rows with Claim_Status in ('XYZ','LMO'), y rows with Clm_Stat_Dt is null, all x rows also satisfy y, etc.) Also consider the not exists filter here (e.g. there may only be a few rows in the table whose Mbr_Sys_Id does not exist in REF_UHC_COSMOS_MEMBER).
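A sketch of the counting queries that would answer these distribution questions (table and column names are taken from the UPDATE statement above):

```sql
-- Total rows in the table
SELECT COUNT(*) AS total_rows
  FROM ref_uhc_cos_phys_clms;

-- Rows matching each filter on its own
SELECT COUNT(*) AS status_rows
  FROM ref_uhc_cos_phys_clms
 WHERE Claim_Status IN ('XYZ', 'LMO');

SELECT COUNT(*) AS null_dt_rows
  FROM ref_uhc_cos_phys_clms
 WHERE Clm_Stat_Dt IS NULL;

-- Rows matching all filters combined, including the NOT EXISTS
SELECT COUNT(*) AS candidate_rows
  FROM ref_uhc_cos_phys_clms h
 WHERE h.Claim_Status IN ('XYZ', 'LMO')
   AND h.Clm_Stat_Dt IS NULL
   AND NOT EXISTS (SELECT 1
                     FROM REF_UHC_COSMOS_MEMBER m
                    WHERE m.MBR_SYS_ID = h.MBR_SYS_ID);
```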

Is parallelism an option? (Are there enough resources to allow it? I am definitely not suggesting it.)
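If resources did allow it, parallel DML would look something like this. This is only a sketch, not a recommendation; the degree of 4 is an arbitrary assumption, and the predicates are simplified to the plain-column form rather than the function-based-index form.

```sql
-- Parallel DML must be enabled at session level before the UPDATE runs
ALTER SESSION ENABLE PARALLEL DML;

UPDATE /*+ PARALLEL(h 4) */ ref_uhc_cos_phys_clms h
   SET h.Claim_Status = 'ABC'
 WHERE h.Claim_Status IN ('XYZ', 'LMO')
   AND h.Clm_Stat_Dt IS NULL
   AND NOT EXISTS (SELECT 1
                     FROM REF_UHC_COSMOS_MEMBER m
                    WHERE m.MBR_SYS_ID = h.MBR_SYS_ID);

-- After parallel DML the transaction must be committed (or rolled back)
-- before the session can query the modified table again
COMMIT;
```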

Is this a statement that you will run once and forget about? Does it really matter how long it takes?

I would NOT expect an INDEX to be used when updating 37M rows of a table containing 60M rows.

sol.beach - Actually the full table scan is filtering and returning 60M rows from the almost 600M rows in the table. It's a minor thing, but I thought I'd just be clear about it. The OP posted recently that the row count for the CLMS table was 589,053,467. The execution plan is telling us the number of rows in the data set returned by the full table scan after the filter has been applied, not the row count in the table itself.

I would still expect the Optimizer to choose a full table scan for 10% of the rows in a table though, so the outcome hasn't changed in any significant way. In this case a full table scan will probably always be the "lowest cost" access method given the high percentage of rows being processed. And the OP has made it clear that this table gets emptied and then reloaded and reprocessed each month. So it is always updating "most" of the data in the table each time it runs.

The table is appended to each month and we are only trying to process (update) the newly loaded rows. We are not touching the older ones. The client requires us to keep the retro claims for future reference.