When extracting data through the Oracle Connector stage, use hash partitioning, but make sure the table is hash-partitioned on the key columns and that indexes exist on the WHERE-clause columns so the fetch is faster.

In the case of a Transformer stage, hash partitioning is also required on the key columns, and perform a sort on both links, the primary as well as the reference link.

As mentioned earlier, use hash partitioning on the joining keys in the extract job, and use Same partitioning in the second job, provided you don't change the keys in the extract job. You would need to sort the data on those keys too. You can use Same partitioning on the Sort stage as well, if you are going to have it before the Join stage.
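The reason hash partitioning plus sorting works for a join can be illustrated outside DataStage. A minimal Python sketch (the column names are hypothetical, not from the original thread): hashing both links on the join key guarantees that equal keys land in the same partition, so a per-partition sorted join never misses a match.

```python
# Sketch of hash partitioning: rows with equal join keys always
# land in the same partition index, so a downstream per-partition
# join sees every match. Column names are illustrative only.

def hash_partition(rows, key, num_partitions):
    """Route each row to a partition based on the hash of its key,
    then sort each partition on that key (as a Join stage expects)."""
    partitions = [[] for _ in range(num_partitions)]
    for row in rows:
        p = hash(row[key]) % num_partitions
        partitions[p].append(row)
    for part in partitions:
        part.sort(key=lambda r: r[key])
    return partitions

primary = [{"cust_id": i, "name": f"n{i}"} for i in (3, 1, 2)]
reference = [{"cust_id": i, "city": f"c{i}"} for i in (2, 3, 1)]

# Partition both links identically on the join key.
left_parts = hash_partition(primary, "cust_id", 2)
right_parts = hash_partition(reference, "cust_id", 2)

# Equal keys are guaranteed to sit in the same partition on both sides,
# which is why the second job can keep Same partitioning.
for lp, rp in zip(left_parts, right_parts):
    assert {r["cust_id"] for r in lp} == {r["cust_id"] for r in rp}
```

This is also why Same partitioning is safe downstream: once the data is hashed on the key, re-hashing on the same key would only shuffle rows to the partitions they are already in.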

That's the concept of a left outer join. Key values of the right dataset that match the left dataset will be populated; the rest of the right dataset's key values will be dropped. If you want to find out all the key values of the right dataset that were excluded from the join, use a Merge stage along with a reject link.
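The behaviour described above can be sketched in plain Python (column names are hypothetical): every left row survives the join, right rows only contribute where their key matches, and the unmatched right rows are exactly what a Merge stage's reject link would carry.

```python
# Sketch of left outer join semantics: all left rows are kept,
# right rows contribute columns only on a key match, and the
# unmatched right rows go to the "reject" side.

def left_outer_join(left, right, key):
    right_index = {r[key]: r for r in right}
    joined = []
    for l in left:
        match = right_index.get(l[key])
        # Keep the left row either way; merge in right columns on a match.
        joined.append({**l, **(match or {})})
    matched_keys = {l[key] for l in left}
    rejects = [r for r in right if r[key] not in matched_keys]
    return joined, rejects

left = [{"id": 1, "a": "x"}, {"id": 2, "a": "y"}]
right = [{"id": 2, "b": "p"}, {"id": 9, "b": "q"}]

joined, rejects = left_outer_join(left, right, "id")
# Both left rows survive; only id=2 picks up column "b".
# rejects holds the right row with id=9, which never matched.
```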

Hope Colmoss's reply answered your question.
Now a question for you: why do you need to do a hash partition again in the second job, when you have already partitioned the data?
Recommendation: use Same partitioning.

Matching records should not produce NULL values unless you have data type differences or an invalid column mapping. Make sure you are also taking Unicode into account. Are the data types for both inputs exactly the same? The field lengths too?
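The data-type point is easy to reproduce. In this hedged Python sketch (hypothetical column names), a key carried as an integer on one link and as a string on the other never compares equal, so rows that look identical fail to match and the lookup columns come back empty, the NULL symptom described above.

```python
# Sketch: a key that is an int on one link and a string on the
# other never matches, so "matching" rows come out with missing
# (NULL-like) values even though the data looks identical.

def lookup(left, right, key):
    index = {r[key]: r for r in right}
    out = []
    for l in left:
        match = index.get(l[key])  # misses when the key types differ
        out.append({**l, "city": match["city"] if match else None})
    return out

left = [{"cust_id": 1}]                       # key as an integer
right_bad = [{"cust_id": "1", "city": "NY"}]  # same key, but a string
right_ok = [{"cust_id": 1, "city": "NY"}]     # types agree

assert lookup(left, right_bad, "cust_id")[0]["city"] is None  # NULL-like miss
assert lookup(left, right_ok, "cust_id")[0]["city"] == "NY"   # clean match
```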

Hi Murali,
You can use hash partitioning in the Transformer and Same partitioning in the Oracle Connector stage.

Using hash partitioning in the Oracle Connector stage is strictly related to the way you are loading data into the table: if you are inserting and updating rows based on the primary key, then you need hash partitioning; otherwise you can simply use Same partitioning.