1)Do we need to take any pre backups before we do this (Take full DB backup)? But again i will have to take it on ASM only since i dont have any other storage on this server :-)
Will this have any impact on data?
2)Is it better if we shutdown all instances running on this server while we do this? Will it help to improve the speed of data migration to new luns?
3)Because you said this can be done 100% online. So while doing this, will there be some performance or slowness issue in DB transactions?

1)Do we need to take any pre backups before we do this (Take full DB backup)? But again i will have to take it on ASM only since i dont have any other storage on this server :-)

How comfortable are you doing ASM maintenance without a backup in case you make a mistake? Better safe than sorry. Using the same storage media for your database and its backup is not a good design. You should at least maintain some external backup to tape or USB.
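If you do take a pre-migration backup, even if it has to land on ASM itself, RMAN is the usual tool. A minimal sketch, assuming a fast recovery area disk group named +FRA (the disk group name and tag are illustrative):

```
RMAN> BACKUP DATABASE FORMAT '+FRA' TAG 'pre_lun_migration';
RMAN> BACKUP CURRENT CONTROLFILE FORMAT '+FRA';
RMAN> BACKUP SPFILE FORMAT '+FRA';
```

Keep in mind a backup sitting on the same storage you are about to rework does not protect you from a storage-level mistake, which is why the external copy is still the safer option.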

Will this have any impact on data?

Certainly.

2)Is it better if we shutdown all instances running on this server while we do this? Will it help to improve the speed of data migration to new luns?

Yes. You can also copy database files while the database is offline using the ASMCMD cp command. It allows you to copy files between Oracle ASM disk groups and the operating system.
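For reference, an ASMCMD cp session might look like this (the disk group, database and file names are illustrative, not your actual paths):

```
ASMCMD> cp +DATA/ORCL/DATAFILE/users.259.123456789 /u01/backup/users01.dbf
ASMCMD> cp /u01/backup/users01.dbf +NEWDATA/ORCL/users01.dbf
```

For datafiles, the database (or at least the affected tablespace) must be offline for such a copy to be consistent.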

3)Because you said this can be done 100% online. So while doing this, will there be some performance or slowness issue in DB transactions?

Additional I/O and data processing will slow down performance. Whether it becomes a noticeable issue depends on your workload and cannot be determined in advance.

user 777111 wrote:
1)Do we need to take any pre backups before we do this (Take full DB backup)? But again i will have to take it on ASM only since i dont have any other storage on this server :-)
Will this have any impact on data?

I have swapped not only LUNs, but one storage architecture for another, using ASM and the above approach. Multiple times. Different clusters. Felt no need for backups. Each time it was done with the database(s) online and doing normal processing.

2)Is it better if we shutdown all instances running on this server while we do this? Will it help to improve the speed of data migration to new luns?

Not really.

3)Because you said this can be done 100% online. So while doing this, will there be some performance or slowness issue in DB transactions?

As you can set the "aggressiveness" of the load balance operation, you can control the impact on overall I/O performance.
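For those following along, the "aggressiveness" referred to here is the ASM rebalance power (0-11 in older releases, higher with 11.2+ compatibility settings). A sketch of the whole swap, with illustrative disk group and device names:

```
-- Add the new LUNs and drop the old disks in one statement so ASM
-- performs a single rebalance; POWER controls how aggressive it is.
ALTER DISKGROUP data
  ADD DISK '/dev/mapper/new_lun1', '/dev/mapper/new_lun2'
  DROP DISK data_0000, data_0001
  REBALANCE POWER 2;

-- Monitor progress; EST_MINUTES gives a rough time to completion.
SELECT group_number, operation, state, power, sofar, est_work, est_minutes
FROM   v$asm_operation;

-- Throttle up or down at any time without restarting the operation.
ALTER DISKGROUP data REBALANCE POWER 6;
```

The old disks are only released once the rebalance completes, so the operation is safe to run with the database open.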

This method as described above, is seamless and fully transparent to the database. I see no reason to touch the database and disable/shut it down during this period.

A very large telco I worked for did this under 10g ASM, swapping out 250 TB of ASM on EMC storage for NetApp Filer storage. They were the first ever to try it on this scale and Oracle was not certain of the outcome. It took 27 days. This was on a 2-node Sun 6900 cluster and moved data at a rate of approx. 300 GB/hr, while adding over 1 TB of data per day, without noticeable performance degradation. I would never attempt the all-at-once method on Linux, as the I/O channels simply cannot sustain the I/O rates. Just adding 2 devices on Linux can bring database performance to its knees.

The largest swap I did was from a home grown storage array back to EMC - but there were 2 distinct I/O fabric layers (Infiniband on the one hand and fibre channels on the other), so I/O capacity was not an issue. Ran the rebalance at full power.

I would not be so quick to blame the o/s for not being able to sustain I/O rates. It all depends on the I/O fabric layer and how it is configured. The problem as I see it is that Linux is often adopted as a new server o/s by people from a Windows background, and approached from a Windows sysadmin's configuration perspective. Which inevitably means some kind of screwup somewhere.

BTW, if I may ask, how was the NetApp Filer performance? What protocol was used, and was the network layer dedicated? I still find NetApp Filer architecture a hard sell as I do not believe that IP is suited as an I/O fabric layer. Looking for reasons to challenge my views on this and rethink it. :-)

The TCP/IP protocol imposes some overhead, which I think is why jumbo frames were invented. Each received Ethernet frame needs to be processed by the network hardware and software. TCP/IP is not the best protocol available to maintain most efficient and highest possible I/O rates, in particular where low latency is essential.
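Enabling jumbo frames on Linux is a one-liner per interface, though every device in the path (NICs and switches) must support the larger MTU; the interface name and address below are illustrative:

```
# Raise the MTU to 9000 bytes (jumbo frames); requires root and
# end-to-end support on every NIC and switch in the path.
ip link set dev eth0 mtu 9000

# Verify the path carries jumbo frames without fragmenting:
# 8972 = 9000 - 20 (IP header) - 8 (ICMP header)
ping -M do -s 8972 192.168.1.10
```

If any hop has a smaller MTU, the ping with the don't-fragment flag will fail, which is exactly the misconfiguration that makes jumbo-frame rollouts tricky.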

As a medium to share and exchange data, e.g. fileserver, considering how fast network devices have become, I see a clear advantage of TCP/IP implementations because of convenience, portability and compatibility.

I'm running a Synology DS1812+ at home with 21 TB of redundant storage. It's based on Linux and BusyBox. I use it as a file server, backup and multimedia server for several computer systems at home. I have a cheap 1 Gbit network with a $30 switch (without LACP link aggregation) and achieve file copy rates of about 115 MB/s (roughly 400 GB/h) when transferring large files using Apple AFP over TCP/IP, measured with a stopwatch. I don't think anything could possibly be faster, more reliable or more versatile. It's the coolest thing I ever owned, with a nice web admin interface. The amount of features, user friendliness and performance it provides is simply amazing. It does everything.

Dude wrote:
The TCP/IP protocol imposes some overhead, which I think is why jumbo frames were invented. Each received Ethernet frame needs to be processed by the network hardware and software. TCP/IP is not the best protocol available to maintain most efficient and highest possible I/O rates, in particular where low latency is essential.

Jumbo frame implementations are few and far between in my experience.

The other two problems are equally concerning. IP networks are used as a shared medium. This can kill your IP-based I/O fabric layer. Then there is TCP - IMO the most unsuitable choice as an I/O protocol in the IP suite.

As a medium to share and exchange data, e.g. fileserver, considering how fast network devices have become, I see a clear advantage of TCP/IP implementations because of convenience, portability and compatibility.

Agree with the points on convenience and so on. But from a technical perspective, I find that a poor solution - unless the fabric layer is a dedicated and private IP network: private switches, high speed (10Gb+), jumbo frames, and a protocol such as SRP or FCoE.

I'm running a Synology DS1812+ at home with 21 TB of redundant storage. It's based on Linux and BusyBox. I use it as a file server, backup and multimedia server for several computer systems at home. I have a cheap 1 Gbit network with a $30 switch (without LACP link aggregation) and achieve file copy rates of about 115 MB/s (roughly 400 GB/h) when transferring large files using Apple AFP over TCP/IP, measured with a stopwatch. I don't think anything could possibly be faster, more reliable or more versatile. It's the coolest thing I ever owned, with a nice web admin interface. The amount of features, user friendliness and performance it provides is simply amazing. It does everything.

The versatility and features are not because it is IP based. The same, and more, exist on other architectures and fabric layers too. The issue is that connectivity today mostly means IP and TCP, followed by UDP - but IMO that does not mean these are the most appropriate technologies to use as an I/O layer.

I was just giving my home setup as an example, which would also be suitable for a small business or office. I could (not really) spend 500'000 USD to buy an EMC, but I found a solution that cost me "only" 2'000 USD and does even more than EMC could ever do in this setup. Most of the versatility and features of the storage box are because of TCP/IP.

Whether or not TCP/IP is an appropriate technology for a storage subsystem really depends on what you use it for, and whether you are willing to invest a huge amount of money into a hardware infrastructure that will sooner or later be obsolete, or that you may not be able to fully use or even need. Building a TCP/IP network is cheap and easy to manage compared to other dedicated storage networks, and the performance can be acceptable and reasonable.

No arguments from me that fibre channel architecture (as used by EMC) is expensive. The price for a single 8Gb fibre channel alone makes me shudder...

I'm all for alternative storage solutions. I just do not see a shared IP-based architecture, using TCP as the storage protocol, as a viable alternative. I would rather use Infiniband as the fabric layer, with SRP as the storage protocol. If not cheaper, it would cost around the same as a 10Gb IP architecture - except that it would be 40Gb, running a wire protocol designed for minimal latency, using 65KB super jumbo frames.

Don't get stuck in your ways ;-) IT has a history of rewarding trivial and cheap technologies. Who would have thought, 20 years ago, of using a PC as an enterprise server system? Take USB, for example, which was initially designed to handle printers, keyboards and mice. Or SATA for enterprise hard drives? Etc, etc.

There is a lot of info about SCSI over TCP/IP and SCSI over Fibre Channel. I found the following article interesting to read: