Then consider the alternative join possibilities and the rowcounts they might produce. Oracle has done its job in trying to keep intermediary rowset sizes as small as possible. I cannot do the research myself since I do not have the data to work with, but you can run some queries to test things out. There are only three tables and some list of columns you did not specify. But the combination of row counts and row lengths will tell you which join order keeps the intermediary rowsets smallest.

What version of Oracle are you on? If it is 11g, use the LEADING() hint to force a join order and see what rowcounts and intermediary sizes pop out. If not, then write the FROM clause with the tables in the order noted and use the ORDERED hint instead. Please post the resulting plans for us.
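For example, a sketch only (the table names A, B, C and the join columns are placeholders for your actual schema):

```sql
-- 11g and later: LEADING fixes the join order explicitly.
SELECT /*+ LEADING(b a c) */
       a.col1, b.col2, c.col3
  FROM a, b, c
 WHERE a.b_id = b.id
   AND c.a_id = a.id;

-- Older style: ORDERED joins the tables in the order they
-- appear in the FROM clause, so list them in the order you want.
SELECT /*+ ORDERED */
       a.col1, b.col2, c.col3
  FROM b, a, c
 WHERE a.b_id = b.id
   AND c.a_id = a.id;
```

Run each variation, capture the plan, and compare the ROWS/BYTES of the intermediary steps.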

The query plan tells you, after each step, how many rows result, how many bytes those rows consume (rowcount times average row width), and how much temp space is needed to complete the operation. These are all estimates, of course, unless you are looking at ACTUALS.
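If you can re-run the query, one way to get actual rowcounts next to the estimates is the GATHER_PLAN_STATISTICS hint plus DBMS_XPLAN.DISPLAY_CURSOR (sketch; replace the query body with yours):

```sql
SELECT /*+ GATHER_PLAN_STATISTICS */ a.col1
  FROM a
 WHERE a.status = 'OPEN';   -- your query here

-- Then, in the same session, fetch the actual plan;
-- E-Rows is the estimate, A-Rows is what really happened.
SELECT *
  FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));
```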

Do you mean the "cost" figure in the plan was higher than in the one you got by default? If so, yes, it probably will be! It's a cost-based optimiser: it is designed to cost a range of possible execution plans and pick the one that comes out cheapest, so unless your hinted version forces a plan the optimiser never considered, it cannot, by definition, have a lower cost than the default plan.

Can you please check to see if you cut the end off each plan? It looks like the tempspace numbers were chopped off. For example, I find it hard to believe that the hash join that spits out 166 million rows taking up 17GB of space requires only 99 bytes of temp space. We need to see the full plan lines to evaluate.

1) we can see that in all the plans, the table scans return the same number of rows for each table. No matter which plan we are looking at, we get these counts:

TABLE A --> 13M
TABLE B --> 8543
TABLE C --> 1489k

This is correct: the ROWS column shows the number of rows returned from a step, and since each of these table scans is an "elemental" step so to speak, meaning a plan step with no children below it (a true leaf node), ROWS holds the count of rows from scanning the table AFTER FILTERING. This is not the rowcount of the table; it is the rowcount after any WHERE CLAUSE FILTERING has been applied.
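A quick way to see this for yourself (hypothetical table and predicate, for illustration only):

```sql
EXPLAIN PLAN FOR
SELECT * FROM a WHERE a.status = 'OPEN';

SELECT * FROM TABLE(dbms_xplan.display);

-- The TABLE ACCESS FULL line for A shows the estimated rows that
-- survive the status = 'OPEN' filter, not the total rows in A.
```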

2) also, we can see that the BYTES column is just as consistent as the ROWS column. BYTES tells us the number of bytes required to hold the intermediary rowset for a step. So for each of these table scan steps, this is the number of bytes needed to hold the data from all surviving rows AFTER PROJECTION, which means after the optimizer throws away all the columns it does not need for the query. Since the query needs the same columns from each table regardless of which plan variation we look at, this number is the same for each of the leaf node table scans. This too is as it should be. I am not sure if there are situations where Oracle is smart enough to throw away columns that were needed for a join but not in the final result set, so I will test that later. For now it is enough to understand that the BYTES column is the size of the resulting intermediary rowset after the step is complete. This rowset is composed of rows that passed filtering and contains only those columns needed to do the query.

3) HOWEVER, we can see that the ROWS and BYTES columns differ for the JOIN STEPS in the plan (HASH JOINS for this query). For example, let us look at line 3 of each plan. In all six plan variations this is the first join in the query. I have noted which pair of tables was joined and the order they were joined in.

a) regardless of the order in which we join a given pair of tables, ROWS and BYTES are the same, so A,B = B,A. Both yield ROWS = 787K and BYTES = 119M. Same with the other variations.

b) BUT... DIFFERENT pairs of tables being joined produce different intermediary rowcounts and different intermediary rowset sizes. This too should be expected, since A,B != A,C from a join perspective (unless B = C).

This is all actually quite obvious. But it does point out that the cost of a join shows up first in the number of rows it produces, second in the size of the rowset it produces, and possibly also in the number of hash probes done when performing the actual row joins.

4) we can also see a difference in the cost. Let us reorder the line-3 entries so that the same table pairs are back to back.

See here how ROWS and BYTES are the same for equivalent table pairs, but the TEMPSPC needed is different. This is normal and is a reflection of how HASH JOIN works. Since one of the rowsets feeding the hash join must be built into an in-memory hash table, the cost to join can vary in several ways. This has to do with:

a) whether the hash table will fit in memory or must spill to disk

b) if spilling to disk is necessary, how much spills to disk and thus how many join passes are needed to complete the join

c) how many total join attempts must be done (e.g. how many probes of the hashed table are needed)

From these six plan variations we can surmise that TABLE B will fit in memory, and that it is the only table that will. That is why the two join pairs that build the hash table on TABLE B (B,A and B,C) do not require tempspc in order to do the join. These joins are OPTIMAL (as opposed to 1-pass or multi-pass). If the result of such a join is itself too big to fit in memory then those rows will be written to tempspc, but that will be charged to the next join operation.
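If you want to confirm how a hash join actually executed (optimal vs. 1-pass vs. multi-pass), you can look at the workarea statistics for the cursor. A sketch; you need to substitute the SQL_ID of your statement:

```sql
SELECT operation_type,
       policy,
       optimal_executions,
       onepass_executions,
       multipasses_executions,
       last_memory_used,
       last_tempseg_size
  FROM v$sql_workarea
 WHERE sql_id = '&your_sql_id';   -- substitute your statement's SQL_ID
```

A join that ran OPTIMAL will show a zero LAST_TEMPSEG_SIZE; a spilled join shows up in the onepass/multipass counts.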

So looking at these joins, we see that the B,A join looks to be the most efficient. It keeps the intermediary rowsets smallest and requires the least amount of tempspc. Since there is only one table left, C, the best join order is B,A,C. You will note that this is the same join order that the OP posted originally. The OP already has what looks to be the best plan.

5) one last thing we should point out: although we see that the best join order is B,A,C, and this is the same join order as originally provided by the OP, the OP's original query plan is not the same as the plan shown here. The OP posted this:

Notice that line #4 is not a TABLE SCAN. This is not necessarily the fault of the OP. There are several possibilities as to why the plan in our six-plan group differs from the original plan posted for the B,A,C join order.

a) the OP was looking at different tables

b) there was an index that was dropped

c) EXPLAIN PLAN is by its nature a guess based on current session settings, not necessarily the actual plan used. If the OP originally captured the plan with dbms_xplan.display_cursor, which reports the actual plan of the last statement executed in the session, then the OP's original posting would be the actual plan used.

So this may be a case of EXPLAIN PLAN not giving the actual plan. Neat. Or maybe the OP can provide some other explanation.

In any event, this is what I wanted people to see: that plan costs are driven by various factors, and that the OP had the best plan to begin with, at least based on the CARDINALITY and SPACE requirements of EXPECTED INTERMEDIARY ROWSETS.

Plans are affected by environment settings. I could easily envision that PL/SQL DEVELOPER has changed environment settings and that the plans differ accordingly. If you have access to the V$SES_OPTIMIZER_ENV view, figure out the SIDs of your SQL*Plus and Developer sessions and see what is different. You will likely find one or more optimizer settings differ between the two. You can then adjust your SQL*Plus session to match your Developer session (or vice versa) and see if that makes the plans the same.
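A sketch of such a comparison (substitute the two SIDs, which you can find in V$SESSION):

```sql
SELECT a.name,
       a.value AS sqlplus_value,
       b.value AS developer_value
  FROM v$ses_optimizer_env a
  JOIN v$ses_optimizer_env b
    ON b.name = a.name
 WHERE a.sid = &sqlplus_sid      -- SID of the SQL*Plus session
   AND b.sid = &developer_sid    -- SID of the PL/SQL Developer session
   AND a.value <> b.value;       -- show only the settings that differ
```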

Please specify only the columns you really need in the SELECT clause. If those columns are part of indexes involved in the joins, some table scans may be replaced by index scans, which can be much quicker.

1. You don't need any dynamic SQL here. It just needlessly consumes CPU (and probably performs unnecessary soft parses). Change it to embedded SQL.
2. Your original query does not have any indexable predicates (except the joins), so the optimizer has to match all rows of all tables, and it rightly decides to do that with HASH joins and full table scans.
3. Post the TKPROF output - without it, it's impossible to know where your proc is spending its time.
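To illustrate point 1 (the procedure and table names here are hypothetical; only the shape of the change matters):

```sql
-- Dynamic SQL: the statement text is parsed at run time,
-- costing CPU and a soft parse on every call.
CREATE OR REPLACE PROCEDURE upd_price_dyn(p_id IN NUMBER, p_amt IN NUMBER) AS
BEGIN
  EXECUTE IMMEDIATE
    'UPDATE items SET price_amt = :1 WHERE item_id = :2'
    USING p_amt, p_id;
END;
/

-- Embedded (static) SQL: parsed once when the procedure is
-- compiled, so each call is cheaper.
CREATE OR REPLACE PROCEDURE upd_price(p_id IN NUMBER, p_amt IN NUMBER) AS
BEGIN
  UPDATE items
     SET price_amt = p_amt
   WHERE item_id = p_id;
END;
/
```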

These all seem to use the same basic rowset as input. Additionally, upon examination it looks like the three updates each update a different set of rows. Even if there is overlap among them, I don't think that matters, because the data being updated is not related to the data being fetched. Thus update #1 does not affect the set of rows updated by update #2, and so forth.

With that in mind, I would try to combine the three updates into one using CASE. Consider these lines added to the query:

,case when a.Item_Cost_Amt is null then 1 else 0 end item_id_null
,case when c.Unitssold > c.Markdownunitssold And a.Rglr_Unit_Price_Amt Is Null then 1 else 0 end units_gt_markdown
,case when c.Unitssold <= c.Markdownunitssold And a.Rglr_Unit_Price_Amt Is Null then 1 else 0 end units_lt_markdown

and (
     a.Item_Cost_Amt is null or
     (c.Unitssold > c.Markdownunitssold And a.Rglr_Unit_Price_Amt Is Null) or
     (c.Unitssold <= c.Markdownunitssold And a.Rglr_Unit_Price_Amt Is Null)
    )

If you add these three columns to your base query and this where clause predicate, then you can query the data once and do a form of conditional update; something like this maybe:
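One hedged sketch of the shape, purely illustrative: the target table, join key, and SET column below are hypothetical, since the original three UPDATE statements were not shown; only the CASE flag columns come from the fragments above.

```sql
MERGE INTO item_prices t              -- hypothetical target table
USING (
    SELECT a.item_id,                 -- hypothetical join key
           case when a.Item_Cost_Amt is null then 1 else 0 end item_id_null,
           case when c.Unitssold > c.Markdownunitssold
                 and a.Rglr_Unit_Price_Amt is null then 1 else 0 end units_gt_markdown,
           case when c.Unitssold <= c.Markdownunitssold
                 and a.Rglr_Unit_Price_Amt is null then 1 else 0 end units_lt_markdown
      FROM a, c                       -- stand-ins for the real base query
     WHERE c.item_id = a.item_id
       and (a.Item_Cost_Amt is null
            or (c.Unitssold > c.Markdownunitssold and a.Rglr_Unit_Price_Amt is null)
            or (c.Unitssold <= c.Markdownunitssold and a.Rglr_Unit_Price_Amt is null))
) q
ON (t.item_id = q.item_id)
WHEN MATCHED THEN UPDATE
   SET t.price_flag = case            -- hypothetical target column
                        when q.item_id_null      = 1 then 'COST_MISSING'
                        when q.units_gt_markdown = 1 then 'GT_MARKDOWN'
                        else                              'LE_MARKDOWN'
                      end;
```

The point is that the base rowset is fetched once and the CASE decides which of the three update rules applies to each row, instead of scanning and joining three times.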