My Streams environment works perfectly between two 11g instances, but I am concerned about its efficiency.
I think I have configured the environment well. I used the Streams advisors for the pools, and even allocated more memory than recommended.

The streams pool is about 300M on both source and destination.

aq_tm_processes=1
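For reference, a quick way to verify these settings and see what the pool advisor suggests (a sketch; the views and parameters are standard Oracle ones, but column availability may vary by version):

```sql
-- Current Streams-related memory and AQ settings
SHOW PARAMETER streams_pool_size
SHOW PARAMETER aq_tm_processes

-- The Streams pool advisor estimates spill activity at different pool sizes
SELECT streams_pool_size_for_estimate, estd_spill_count
  FROM v$streams_pool_advice
 ORDER BY streams_pool_size_for_estimate;
```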

And now for my concern:
This is only a test setup, so please don't yell at me.
For test purposes I am replicating one table with 15,000 rows. The first column is a datetime and is refreshed every 5 minutes with SYSDATE.
After 20-30 minutes my Streams replication stalls and the capture process shows the status 'PAUSED FOR FLOW CONTROL'.
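The state can be watched from the dynamic performance views (a sketch using the standard Streams views; run the first query on the source, the second on the destination):

```sql
-- Capture process state on the source; expect 'PAUSED FOR FLOW CONTROL'
-- when the downstream components cannot keep up
SELECT capture_name, state, total_messages_captured, total_messages_enqueued
  FROM v$streams_capture;

-- Apply reader state on the destination
SELECT apply_name, state
  FROM v$streams_apply_reader;
```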

This happens every time. Now I am updating only 1,000 rows with SYSDATE; we will see what happens.

My question: is it possible that a Streams environment cannot properly propagate 15,000 records (committed in a single transaction)?
I was sure there would be no problem with this, and now... such a surprise.

John Watson, Senior Member (Messages: 6581, Registered: January 2010, Location: Global Village)

Unlike some people, I think Streams is a pretty good product. But it does need careful setup and can be horribly complicated. Have you gone through MOS note 746247.1, which is the starting point for this sort of issue?

No, but thanks John, I will read it.

I narrowed the number of records updated periodically down to 1,000, and the same thing happened: Streams worked for about 30 minutes and then stopped with the same symptoms as above.
Now I have 10 records updated per transaction every 5 minutes, and it still works.

But never mind. I assumed that 1,000 rows per transaction would not be too much for a Streams environment, and this was an unpleasant surprise.

Streams is good in an OLTP environment (even with complex changes) where you insert/update/delete rows one by one, NOT for bulk modifications.

Regards
Michel

That surprises me very much, because I thought Streams was perfect, for example, for ETL processes, which mostly consist of very large transactions to be propagated. I read this in a technical book, but maybe the book was wrong.

Update:
At this moment I am updating 10 records every 5 seconds, and everything works perfectly.

Now I am updating the same 15,000 rows in the datetime column (as in my first post), but with one transaction per record.
EVERYTHING WORKS FINE!
The problem was with large transactions (over 1,000 records per transaction); they appear to be a bottleneck for Streams.
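For anyone wanting to reproduce this, the bulk update can be turned into one transaction per row roughly like this (a sketch; `test_tab`, `id`, and `ts_col` are made-up names for illustration):

```sql
-- Instead of: UPDATE test_tab SET ts_col = SYSDATE;  (one 15,000-row transaction)
-- commit each row separately so Streams sees 15,000 single-row transactions:
BEGIN
  FOR r IN (SELECT id FROM test_tab) LOOP
    UPDATE test_tab SET ts_col = SYSDATE WHERE id = r.id;
    COMMIT;  -- one transaction per row
  END LOOP;
END;
/
```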

But back to my doubts about whether Streams is a solution for replicating warehouse data. I read this in "Oracle 11g R1/R2 Real Application Clusters Essentials":

===================
Streams is not a true disaster recovery solution or high availability option, but more
of a complementary solution to enhance the availability options provided by Oracle
Data Guard and Oracle RAC technologies. One of the most common ways to use this
technology is with large Oracle data warehouses and data marts to replicate a subset
of the source data to another environment for testing and verification purposes.
===================

And as far as I know, warehouses involve tons of DML.
But this description confuses me a little, given my problems with "large" transactions (15,000 DMLs per transaction) in my Streams environment.

Genuine data warehouse databases are fed by OLTP ones; there are tons of DML statements, but they are single-row ones, which is no problem for Streams.
It can support large transactions as long as each statement is a single-row one.
The problem is not the transaction size, it is the DML size.

...
It can support large transactions as long as each statement is a single-row one.
...

That's the answer to my problem, thanks. It looks like this is what was causing it.

But what do you do if, for example, 99% of your DML is single-row, but once a week there is one big update touching, let's say, 200,000 rows of a table that needs to be replicated through Streams? And suppose you have no way to modify this problematic query, because it is part of software no longer supported by your vendor, and no one is able to rewrite that part of its code.

Streams would probably fail in this case.

We could put a trigger on the table and propagate the changes to another table using single-row updates, then replicate that second table instead of the first one. That could be one solution.
... but is propagating the first table really not possible using Streams?
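A sketch of that trigger idea (all names hypothetical). One caveat: a plain row trigger commits together with the original statement, so the shadow table would still receive one large transaction; splitting it per row requires an autonomous transaction, which carries its own consistency risk if the main transaction rolls back:

```sql
-- Mirror each update into a shadow table, committing row by row.
-- The shadow table, not the original, would be in the Streams rule set.
CREATE OR REPLACE TRIGGER trg_mirror_test_tab
AFTER UPDATE ON test_tab
FOR EACH ROW
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;  -- commits independently of the big update
BEGIN
  UPDATE test_tab_shadow
     SET ts_col = :NEW.ts_col
   WHERE id = :NEW.id;
  COMMIT;  -- caution: persists even if the triggering transaction rolls back
END;
/
```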

John Watson, Senior Member (Messages: 6581, Registered: January 2010, Location: Global Village)

Hello again. You seem to be hung up on the idea that "Streams doesn't work for a large update statement". But your only evidence is that you had a problem with flow control, which you have not tried to fix. There are many articles on Metalink about how to tune Streams for this.
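As one example of such tuning (a sketch; `MY_APPLY` is a hypothetical apply process name, and the `txn_lcr_spill_threshold` apply parameter should be checked against the documentation for your exact version), the apply side can be told to spill large transactions to disk rather than holding everything in memory:

```sql
-- Spill transactions larger than 10,000 LCRs to disk on the apply side,
-- which can relieve the memory pressure that triggers flow control
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'MY_APPLY',
    parameter  => 'txn_lcr_spill_threshold',
    value      => '10000');
END;
/
```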

I will do some more research and try to fix my 'flow control' problems next week.

I tried using the advisors, read articles, and tuned my Streams setup, but nothing has helped so far, so I suspected that maybe Streams simply cannot deal with a 15,000-row multi-row update.

... so in starting this discussion I wanted your opinion on whether this problem is impossible to overcome. If there is still a chance to make Streams propagate 15,000 rows updated in one transaction, I will try again next week.

I have done this for real at several clients (in automotive and finance enterprises), and I can say that it indeed does not work with large DML; even the best Oracle experts could not tune Streams to make it work. Every week the flows are in error, with a backlog growing by 2 days per day (it takes 2 days to apply one day of redo records). (Not to mention the increasing load on the source database, which penalized the source's clients and was partly solved by using a remote capture process in the next version.)
So no, it does not work with large DML in the real world.
And you know what: the client chose to use Sybase Replication Server instead and is now investigating GoldenGate.