Do I need to increase our redo log size from 250 MB to 500 MB? Which is more beneficial for updating a Data Guard standby database: smaller redo logs or larger ones?

I understand that the database performs better if you have larger redo logs?

What version and protection mode are you on?

Oracle suggests 4-5 log switches per hour, and as per your log history that was exceeded only 2-3 times. You have to check whether any jobs were running at those times and causing the extra log switches.

BTW, if you have 250 MB logs and 10 log switches an hour, then increasing to 500 MB may bring the log switches per hour down to around 5 (rough calculation).
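To see how your log switch rate actually varies by hour, one common approach is to group the log history by hour. A sketch, assuming the default v$log_history retention covers the period you care about:

```sql
-- Count log switches per hour from the log history
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS switches
FROM   v$log_history
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;
```

Hours with an unusually high switch count are the ones to cross-check against your job schedule.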

Moreover, the archive log suggestion does not apply on the standby. You have to concentrate on performance on the primary database; then, whatever size you created the online redo logfiles with, standby redo logs of the same size have to be created on the standby.

If you are using ARCH transport, then shipping huge archive logs can also cause issues over the network, so you have to maintain a balance.

*What is the version and protection mode you are in?* Which v$ view can I query for this info?

*Oracle suggest to have 4-5 log switches per hour.* Our busiest time is 12am-1am, with an average of 27 logs per hour. Do I compute the size of the logs for this peak hour only, or for the entire 24-hour average, since the other hours are low? If I have to compute for the peak hour, then to reduce it to 5 logs I need to increase the redo size to 1.2 GB. Is this viable?

*Moreover archive log suggestion is not on standby.* Yes, I know; I just made a typo.

*If you are using ARCH transport then huge archive log shipping also can cause issue over network, So you have to maintain balance.* This balance thing is the most ambiguous, a grey area: you cannot measure which is which. Can you suggest what a good balance for our database would be, based on the info above?

*What is the version and protection mode you are in?* Which v$ view can I query for this info?


You can check it with "select protection_mode from v$database".
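Putting both halves of the question together, the version and the protection mode come from two standard views. A minimal sketch:

```sql
-- Database version
SELECT banner FROM v$version;

-- Data Guard protection mode, the level actually in effect, and the role
SELECT protection_mode, protection_level, database_role
FROM   v$database;
```

PROTECTION_LEVEL is worth checking alongside PROTECTION_MODE, since it shows the protection currently in force rather than just the configured mode.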

*Oracle suggest to have 4-5 log switches per hour.* Our busiest time is 12am-1am, with an average of 27 logs per hour. Do I compute the size of the logs for this peak hour only, or for the entire 24-hour average, since the other hours are low? If I have to compute for the peak hour, then to reduce it to 5 logs I need to increase the redo size to 1.2 GB. Is this viable?

OK, certainly you can consider increasing it, but is it really causing any performance issue or reaching a bottleneck? Also ensure you are using real-time apply (create standby redo logs + RTA) to avoid significant data loss, so that every committed transaction is sent to the standby.
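The resize plus standby-redo-log setup described above can be sketched roughly as follows. Group numbers, file paths, and the 500M size are placeholders for illustration, not your actual layout:

```sql
-- On the primary: add larger online redo log groups, then drop the old
-- ones once they show as INACTIVE in v$log
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/DB/redo04.log') SIZE 500M;
-- ... repeat for the remaining new groups, switch logs until the old
-- groups become INACTIVE, then:
ALTER DATABASE DROP LOGFILE GROUP 1;

-- On the standby: standby redo logs of the same size as the online logs
ALTER DATABASE ADD STANDBY LOGFILE GROUP 10 ('/u01/oradata/DB/srl10.log') SIZE 500M;

-- Enable real-time apply on the standby
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;
```

Dropping a group only removes it from the control file; the old files on disk may still need to be deleted at the OS level.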

*If you are using ARCH transport then huge archive log shipping also can cause issue over network, So you have to maintain balance.* This balance thing is the most ambiguous, a grey area: you cannot measure which is which. Can you suggest what a good balance for our database would be, based on the info above?

You can calculate the required bandwidth using the calculation below; it covers only the speed needed to send redo to the standby.

*OK, certainly you can consider increasing it, but is it really causing any performance issue or reaching a bottleneck? Also ensure you are using real-time apply (create standby redo logs + RTA) to avoid significant data loss, so that every committed transaction is sent to the standby.*

What if we have bottleneck in bandwidth?

We have a max of 34 logs per hour. I computed it as 34 * 250 MB * 8 / 3600 = 18.9 Mbps (this is in bits, not bytes, hence the multiplication by 8).

How do I get the redo rate per second? Based on the 34 max logs per hour?

From the AWR report, if you go to the Load Profile section you can see redo size per second; based on that you calculate with this method: Required bandwidth (Mbps) = ((redo rate in bytes per second / 0.7) * 8) / 1,000,000.

Redo size differs from second to second, minute to minute, and database to database; it depends completely on your workload, and you must gather the data from peak hours.

Sometimes redo generation may be high and sometimes much lower. If the peak-hour AWR report shows a redo size per second of 2,108,239.3 bytes, then the required bandwidth comes out to around 24 Mbps. This is only for Data Guard; there is no rule that you must have exactly this network, but it is the speed required to avoid disconnects between the locations and to avoid latency.
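Both numbers quoted in this thread drop straight out of the formula; a quick check run as throwaway queries against DUAL:

```sql
-- Required Mbps = ((redo bytes/sec / 0.7) * 8) / 1,000,000
-- using the peak-hour AWR redo rate quoted above
SELECT ROUND(((2108239.3 / 0.7) * 8) / 1000000, 1) AS required_mbps
FROM   dual;
-- about 24.1

-- The rougher estimate from log switches: 34 logs/hour * 250 MB
SELECT ROUND(34 * 250 * 8 / 3600, 1) AS rough_mbps
FROM   dual;
-- about 18.9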

Whatever the redo size, you can use the formula; of course it is correct, and it was provided by Oracle, not by me.

You can also contact your network admins to find out the present network bandwidth between the two sites.

I feel the question is going beyond the original scope. The actual question was about log switches per hour, and now we are talking about bandwidth and so on, which matters in the case of maximum protection/availability.

What I want to say is:

1) If you are using maximum performance and you have reasonably good average speed, then there is no need to worry at all.

2) In the case of maximum protection or maximum availability, you must concentrate on bandwidth to ensure zero data loss.

But you are using maximum performance, and you have not reported any issues such as the standby constantly lagging. Moreover, you have to concentrate on primary performance and on what is really causing the switches: is it waiting for archiving?
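One way to answer that last question is to look at the log-file-switch wait events on the primary; if "log file switch (archiving needed)" shows significant waits, sessions are stalling because archiving cannot keep up. A sketch:

```sql
-- Instance-wide waits related to log switches on the primary
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event LIKE 'log file switch%';
```

High counts on "log file switch (checkpoint incomplete)" instead would point at checkpointing rather than archiving, which larger redo logs also tend to relieve.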