I need some advice on how stats should be gathered after a database is built from an export file. I have a database of approximately 300 GB of data and 500 GB of indexes that needs stats gathered once the import is done, including data and indexes.
I used this option and it never completed in 18 hours, so I had to stop it.

Is there a better option I should use to complete stats on a schema of this size in less time? (The schema includes some partitioned tables and partitioned indexes.)
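For what it's worth, one common way to shorten the initial gather is to let Oracle sample rather than compute everything, and to run the gather in parallel. A minimal sketch — the schema name and degree here are placeholders, not something from your post, so adjust them to your environment and hardware:

```sql
-- Hypothetical sketch: speed up the post-import schema gather with
-- sampling and parallelism. MYSCHEMA and degree => 8 are assumptions.
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'MYSCHEMA',                    -- placeholder schema
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,   -- sample, don't compute
    degree           => 8,                             -- parallel, if CPU allows
    cascade          => TRUE,                          -- include indexes
    granularity      => 'AUTO');                       -- partition-aware
END;
/
```

Sampling with AUTO_SAMPLE_SIZE is usually far cheaper than a full compute, and a parallel degree matched to your CPU count can cut elapsed time considerably.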

And my second question: in production, what should be the approach for gathering stats?

There are 220 tables in total, of which 13 are range-partitioned on a date column with monthly partitions. It's an OLTP + data warehouse combination with constant data insertion into the current month's partition.
Inserts and updates happen on the current month's partition, rarely on older partitions. Around 20 are big transactional tables with some 10-15 million rows; these are not partitioned but their data changes daily. The remaining tables are small (20 rows to 200,000 rows) and very rarely get updated. What should my approach be for gathering stats on such a schema?
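For the partitioned tables, one option worth considering is gathering stats only on the active monthly partition instead of the whole table. A hedged sketch — the table and partition names below are made up for illustration:

```sql
-- Hypothetical sketch: gather stats on just the current monthly
-- partition of one range-partitioned table. SALES_FACT and
-- P_2007_10 are placeholder names.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => 'MYSCHEMA',       -- placeholder schema
    tabname     => 'SALES_FACT',     -- placeholder partitioned table
    partname    => 'P_2007_10',      -- the current month's partition
    granularity => 'PARTITION',      -- partition-level stats only
    cascade     => TRUE);            -- include that partition's index segments
END;
/
```

Since your older partitions rarely change, their stats stay valid, and re-gathering only the current partition keeps the nightly window small.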

I think I know that.
My first question was about whole-schema stats for the first time after import.
For my second question, your approach is gather_table_stats, so you want me to run full table stats on a 300-million-row table?

Don't take the following as formal advice, but simply as observations.
In my production environment each customer/client has their own schema.
So I have hundreds & hundreds of "identical" schemas.
Some are very small & some are very, very, very large.
About 6 orders of magnitude difference exists between the smallest & largest.
These schemas are being supported on multiple systems which are greatly over provisioned.
In other words I have hardware resources to spare to a factor of 10 or more.
With this as my background & foundation, I do DBMS_STATS.GATHER_SCHEMA_STATS across all schemas every day.
It takes only 2 - 3 hours to complete on each system every day.
FWIW - All systems are running V10.2.0.2
My observation is that very, very, very few tables show LAST_ANALYZED = "yesterday", whichever date yesterday may be.
I have tables with LAST_ANALYZED being many months ago.

If I have the time in the near term, I'll try to better understand the distribution of LAST_ANALYZED dates.
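Looking at that distribution is straightforward from the dictionary. A sketch of the kind of query I have in mind (the owner predicate is a placeholder):

```sql
-- Sketch: count tables by the month their stats were last gathered.
SELECT TO_CHAR(last_analyzed, 'YYYY-MM') AS month_analyzed,
       COUNT(*)                          AS table_count
FROM   dba_tables
WHERE  owner = 'MYSCHEMA'               -- placeholder owner
GROUP  BY TO_CHAR(last_analyzed, 'YYYY-MM')
ORDER  BY month_analyzed;
```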

I cannot suggest that you do as I have done.
However, I know that DBMS_STATS.GATHER_SCHEMA_STATS does not
really go "active" for every table every day.
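That behavior is consistent with gathering only "stale" objects rather than everything. A hedged sketch of that style of invocation — the options value is my assumption about why most tables get skipped, and the schema name is a placeholder:

```sql
-- Sketch: a daily schema gather that skips tables whose data has not
-- changed enough (~10% of rows by default) to mark their stats stale.
-- Relies on table monitoring, which is on by default in 10g.
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname => 'CUSTOMER_SCHEMA',   -- placeholder schema
    options => 'GATHER STALE');     -- only re-analyze stale tables
END;
/
```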

We moved onto these systems a year ago, which is why Nov. 2006 has the highest count.
My point is that even when invoking DBMS_STATS.GATHER_SCHEMA_STATS daily, most tables do NOT have their statistics actually updated.
A majority of the tables had their statistics collected when they 1st arrived on this system & have not been updated since.
There are between 800 & 900 customer schemas on this system & each schema contains about 50 tables.
So what this says to me is that only about 5 tables per schema are active enough to have fresh statistics for this month, 2007-10.