I have a materialized view in Oracle that pulls data from a SQL Server view, which is itself built on a lot of views and huge tables that are updated frequently.

Problem: When the scheduled job tries to refresh it, it first deletes the current data, as that is how a complete refresh of a materialized view works. This delete takes more than 2 hours to complete because there are 9 indexes created on the MV table. On the SQL Server side, the MV query takes 20 minutes to fetch the data, and inserting it into the Oracle materialized view takes 45 minutes more.
If I drop all 9 indexes, the MV refresh takes about 1 hour.

We cannot force the MV refresh to use TRUNCATE because online users need data from this view while the refresh runs.

I don't want the MV to spend 2 hours deleting its current data. But if I drop the 9 indexes, the online users will start complaining about slowness while fetching data from the MV table.
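For anyone following along: the delete-then-insert behavior described above is Oracle's atomic complete refresh, and the truncate-based alternative (which cannot be used here because the MV would appear empty mid-refresh) is selected with the atomic_refresh parameter. A minimal sketch, assuming a hypothetical MV named MY_MV:

```sql
-- Atomic complete refresh (the default): DELETE + INSERT in one
-- transaction, so users always see data, but the delete is slow.
BEGIN
  DBMS_MVIEW.REFRESH(list => 'MY_MV', method => 'C', atomic_refresh => TRUE);
END;
/

-- Non-atomic complete refresh: TRUNCATE + INSERT, much faster, but the
-- MV is empty while the refresh runs -- unacceptable for online users here.
BEGIN
  DBMS_MVIEW.REFRESH(list => 'MY_MV', method => 'C', atomic_refresh => FALSE);
END;
/
```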

Hi tigerpeng - I am not licensed to use partitioning, as it is an extra-cost option.

Hi delfeld - It is a complete refresh; a fast refresh won't work because the data is pulled from MS SQL Server. How could materialized view logs keep track of changes to tables and views (lots of views created on huge tables) in an MS SQL Server database?

The MV query selects 10 to 15 columns from a view in the SQL Server database. That view is built on other views (lots of views created on huge tables) and on tables joined with LEFT JOINs, etc.

Sorry, but why don't you create a new post for your question? Otherwise the experts may abandon my question.

And in my case I don't think Streams or GoldenGate would work either. GoldenGate works like this: when the tables are modified in SQL Server, the data in Oracle is modified online. In my case, Oracle needs data from views created on many views over many tables.

When you do a complete refresh of an MV, it deletes all the data and inserts it again; since this generates a lot of archive logs, it takes a long time.

My suggestion is to drop and recreate the MV. Put the index drop/recreate, the MV drop/recreate, and any object-level grants in a script, in sequential order, and if possible schedule that script during non-business hours. Everything will then be done in the required time and in an automated way, and you will not get any user complaints.
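A sketch of such a sequential rebuild script. All object names, columns, and the database link are placeholders; the link to SQL Server is assumed to go through a gateway such as Database Gateway for SQL Server:

```sql
-- Run during non-business hours, e.g. scheduled via DBMS_SCHEDULER.
DROP MATERIALIZED VIEW my_mv;

-- Recreate the MV; BUILD IMMEDIATE loads the data right away,
-- and with no indexes in place the insert is as fast as possible.
CREATE MATERIALIZED VIEW my_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
AS
  SELECT col1, col2, col3
  FROM   mssql_view@mssql_dblink;   -- DB link to the SQL Server view

-- Recreate the indexes only after the data is loaded.
CREATE INDEX my_mv_ix1 ON my_mv (col1);
CREATE INDEX my_mv_ix2 ON my_mv (col2);

-- Re-apply any object-level grants lost with the DROP.
GRANT SELECT ON my_mv TO app_user;
```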

That sounds like a good plan; we thought about it earlier, because the drop and recreate doesn't take more than an hour. The risk is that many critical recurring processes (PeopleSoft processes) depend on this, and if they fail there is a possibility of business malfunction.

But if we drop the indexes before the refresh and recreate them afterwards, users will face slowness but they will still get the data. That could be the best compromise, I think.

Thank you for all your replies. If anybody knows any tools or tweaks that can help, it will be greatly appreciated.

If you want to put some time into it, create a manual incremental update:
1. Put database triggers on the master tables in SQL Server to write log records as rows are inserted, updated, or deleted. Include a timestamp.
2. Create a view like the existing one, but based only on the log records, or perhaps on a mix of the master table(s) and the logs.
3. Create a materialized view on Oracle that is a copy of the log-based view.
4. Change your existing materialized view to a plain table.
5. At intervals, run a process that updates, deletes, or inserts rows in that table based on the details in the new log-based materialized view.
6. Clear down the log tables on the SQL Server side up to the latest timestamp fully applied to Oracle.

It is a little intensive in terms of build, but all the steps are relatively simple. It gives you a row-level update process from SQL Server to Oracle with no downtime for users and no slowdown due to disabled indexes, etc.
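Step 1 above might look like this on the SQL Server side. Table and column names are made up for illustration; an AFTER trigger uses the inserted and deleted pseudo-tables to record the key, an operation code, and a timestamp:

```sql
-- SQL Server: change-log table and trigger (names are illustrative).
CREATE TABLE dbo.orders_log (
    order_id   INT       NOT NULL,
    operation  CHAR(1)   NOT NULL,   -- 'I', 'U' or 'D'
    changed_at DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

CREATE TRIGGER dbo.trg_orders_log
ON dbo.orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Row in both inserted and deleted => UPDATE; only in inserted => INSERT.
    INSERT INTO dbo.orders_log (order_id, operation)
    SELECT i.order_id,
           CASE WHEN d.order_id IS NULL THEN 'I' ELSE 'U' END
    FROM   inserted i
    LEFT JOIN deleted d ON d.order_id = i.order_id;

    -- Row only in deleted => DELETE.
    INSERT INTO dbo.orders_log (order_id, operation)
    SELECT d.order_id, 'D'
    FROM   deleted d
    WHERE  NOT EXISTS (SELECT 1 FROM inserted i
                       WHERE i.order_id = d.order_id);
END;
GO
```

The cleanup in step 6 then just deletes from dbo.orders_log where changed_at is at or before the last timestamp confirmed applied on the Oracle side.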

How about having a staging materialized view to sync with SQL Server? This MV doesn't need any indexes; you can drop and create them as you want, and once the data is loaded you can use the RENAME command to switch the table names.
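A sketch of the swap, with hypothetical names, treating the user-facing object as a plain table as suggested earlier in the thread: load a staging table with no indexes, build the indexes afterwards, then swap the names, so users read the old copy until the instant of the switch:

```sql
-- Load the staging copy; with no indexes in place the insert is fast.
INSERT /*+ APPEND */ INTO my_mv_stage
SELECT col1, col2, col3 FROM mssql_view@mssql_dblink;
COMMIT;

-- Build the indexes on the staging table while users still read MY_MV.
CREATE INDEX my_mv_stage_ix1 ON my_mv_stage (col1);

-- Swap: each RENAME is near-instant, so the outage window is tiny.
RENAME my_mv TO my_mv_old;
RENAME my_mv_stage TO my_mv;
RENAME my_mv_old TO my_mv_stage;
```

Note that grants and synonyms follow the object, not the name, so they may need to be re-applied after the swap.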

That could become more complex, as the view has more than 20 tables and many other views created on small tables. By doing this we would solve this issue but create slowness in transactions on those tables.