The Sequential Storage Option (aka Random Write Accelerator) is a new feature that comes with PSP1 for SANsymphony-V 10. The name itself already tells you what this feature could be used for, but let me explain this main feature in a bit more detail.

First, to give you an overview: this feature will dramatically increase random write performance on every vDisk you enable this option on, assuming you really have random write I/O patterns. It will not speed up sequential writes or reads in any way; it is designed solely to accelerate random I/O.

Why is this feature needed? Over the last years, storage capacity has increased dramatically. Speed, on the other hand, has not kept pace, so today we have multi-terabyte SATA disks that offer the same speed as their counterparts did 8 years ago. Things get worse with the introduction of shingled magnetic recording (SMR), which allows higher capacities (currently up to 10TB on a single drive) but also increases write latency by up to 6x. Even in the SAS area, capacity grew while performance did not. Only the change from 3.5" to 2.5" form factors sped up track-to-track seek performance a bit.

Why is seek performance so important? Seek performance determines latency: as seeks get faster, latency goes down. And storage is all about latency.

So the current problem is huge disks with slow performance and thus high latencies. On the other side, capacity needs to keep growing, as the demand for more and more storage is unstoppable. Your DataCore environment probably offers large amounts of capacity, but performance suffers because that capacity sits on a quite small number of high-capacity disks. This keeps costs low, but performance as well.

So how can we fill the gap between high-capacity, slow-performance disk drives and high-performance, high-cost drives in an efficient way? To answer this question, you have to find out how to make slow SATA disks deliver the same performance as high-speed disks. Sounds unrealistic... but DataCore is one step ahead.

If you look at the weaknesses and strengths of a high-capacity SATA disk, random performance is slow but sequential performance is quite good. Sometimes SATA disks can even outperform SAS disks in sequential streams. If you look at the way storage is addressed in DataCore, you see that most reads and writes are random. Not a good starting point for the use of SATA disks. For reads you can use the DataCore cache to speed things up, but for writes, even with cache, the data has to be destaged to the disks, and that's where things get really slow.

Sequential Storage is a feature that addresses exactly that issue. The Sequential Storage Option (SSO) more or less transforms random write patterns into sequential-only streams that can be served perfectly even by SATA disks. This way you can speed up random writes by up to 33x. DataCore internal testing with 100% random 4k writes showed, in combination with SSY-V's write cache, sustained performance of up to 10,000 IOPS on a SINGLE SATA disk. This is the performance of an entry-level SSD. Peak performance was even faster. And even SSDs can profit, with a performance gain of up to 3x.

One more note on the mentioned performance gains. DataCore distinguishes between three classes within SSO. First there is "minimum performance": the absolute minimum of performance you get from a single disk by using all DataCore optimization features EXCEPT SSO. Second there is "maximum or peak performance": peak performance can be reached right after enabling SSO on a vDisk. During this phase, writes are put into the index in a very fast and efficient way; garbage collection is not yet necessary, which makes this phase very fast and efficient. The peak performance phase can last from a few minutes up to several hours, depending on the I/O pattern. Third is "sustained performance": the performance you see after the peak phase has ended and garbage collection needs to be done in the index table. This reduces write performance, but still keeps it at a level up to 10-25x faster than without SSO.

How can SSO accomplish that? To understand this, you have to dig a bit deeper into the way SSO works.

SSO puts an extra layer of virtualization into the data path. SSY-V already virtualises the underlying storage, but in the past (and also today, if you disable SSO on a vDisk) the addressing scheme still uses LBA and offset. This makes random writes on the frontend result in random writes in the backend. Optimization was only done through caching and coalescing of redundant writes, so the I/O pattern in the backend matches the pattern on the frontend. As most users run server virtualization in front of a DataCore SAN, random traffic accounts for ~90-95% of the frontend load, resulting in heavily random access on the storage backend. No good place for SATA disks.

SSO now also virtualises the addressing scheme, making it possible to "transform" random I/O into sequential I/O. To accomplish that, write I/O is not automatically destaged to disk (the so-called data rest area, where data is stored unoptimized) but rather stored on a log partition in a kind of internal table. As this table allows the writes to be stored sequentially, the underlying disk only performs sequential writes. The exact technique behind SSO is a bit hard to describe, but the important thing to remember is that SSO makes sequential I/O out of random I/O, fully transparently to the application servers.
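The core idea can be illustrated with a toy model (my own simplified sketch of the general log-structured write technique, NOT DataCore's actual implementation): random frontend writes become sequential appends to a log, while an in-memory index maps each logical block address to its current position in the log.

```python
# Toy model of the log-structured write idea behind SSO.
# Simplified illustration only; all names here are my own invention.

class SequentialLog:
    def __init__(self):
        self.log = []    # append-only log area (sequential on disk)
        self.index = {}  # logical block address -> position in the log

    def write(self, lba, data):
        """A random frontend write becomes a sequential append to the log."""
        self.log.append(data)
        self.index[lba] = len(self.log) - 1  # remember the newest version

    def read(self, lba):
        """Reads are redirected through the index to the newest copy."""
        pos = self.index.get(lba)
        return self.log[pos] if pos is not None else None

log = SequentialLog()
# A random write pattern on the frontend...
for lba, data in [(9000, b"a"), (17, b"b"), (420, b"c"), (17, b"d")]:
    log.write(lba, data)

print(log.read(17))   # b'd' - the newest version of block 17
print(len(log.log))   # 4   - the stale copy of block 17 is still in the log
```

Note the stale copy of block 17 left behind in the log: overwritten blocks accumulate as garbage, which is exactly why the garbage collection mentioned in the "sustained performance" phase becomes necessary over time.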

Some more facts about SSO you should be aware of:

SSO needs additional disk space to store the log partition. That's why a vDisk that is sized to 100GB can grow beyond this point. To be specific, the normal "overhead" of an SSO-enabled vDisk is 4-6 times the configured size. So if you have a 100GB vDisk, fill that disk completely with data, and enable SSO, you can be pretty sure your vDisk's allocated size on the storage system can grow up to 600GB. The additional storage is only for the log. To cut a long story short, you pay the price for SSO with the requirement of a lot more storage. No problem if you have plenty of unallocated space in your environment; if you don't, you should rather do without SSO. But adding some high-capacity SATA disks to profit from SSO is much cheaper than buying SSDs or even all-flash arrays.

The limit for the vDisk growth is 900GB: if your log gets bigger than 900GB, the data will automatically be destaged to disk, reducing the amount of needed space to the amount of data really stored on that vDisk. Additionally, SSO will automatically be disabled.
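A quick back-of-the-envelope check based on the figures quoted above (4-6x overhead, 900GB log limit; the exact product behavior may differ, this is only illustrative arithmetic):

```python
# Back-of-the-envelope sizing for an SSO-enabled vDisk, using the
# figures from the article: 4-6x overhead, 900 GB log growth limit.
# Illustrative only - not an official DataCore sizing tool.

LOG_LIMIT_GB = 900  # log growth limit after which SSO auto-disables

def worst_case_allocation_gb(vdisk_size_gb, overhead_factor=6):
    """Worst-case allocated size = configured size * overhead factor."""
    return vdisk_size_gb * overhead_factor

def sso_stays_enabled(vdisk_size_gb, overhead_factor=6):
    """The log portion is the allocation beyond the configured size."""
    log_gb = worst_case_allocation_gb(vdisk_size_gb, overhead_factor) - vdisk_size_gb
    return log_gb <= LOG_LIMIT_GB

print(worst_case_allocation_gb(100))  # 600 - matches the 100 GB example above
print(sso_stays_enabled(100))         # True:  a 500 GB log stays below the limit
print(sso_stays_enabled(200))         # False: a 1000 GB log would exceed 900 GB
```

So, under the worst-case 6x assumption, vDisks larger than roughly 150GB could hit the log limit if fully loaded; plan your free capacity accordingly.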

Currently there is no monitoring of the SSO driver for capacity constraints, so you have to keep an eye on storage allocation yourself if you enable SSO.

SSO is an "already included" feature; you don't have to license it separately. All you have to do to use SSO is install PSP1 and reactivate your keys.

SSO needs RAM and CPU, so make sure you have enough non-paged pool (NPP) memory (this is the amount of memory that is available to DataCore after the OS and the DCS caches grab their memory) and a recent-generation CPU (at most 1-2 years old). For the NPP memory, you can simply add more physical memory to your DCS or reduce the amount of memory used by DataCore for caching purposes. A rule of thumb is that for every GB of log space created by SSO you should reserve 5MB of NPP, but a general limit exists for NPP. This limit is 128GB, no matter which Windows OS you use. So use SSO wisely.
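The rule of thumb above translates into simple arithmetic (illustrative only; check the DataCore documentation for authoritative sizing numbers):

```python
# NPP memory sizing per the rule of thumb from the article:
# 5 MB of non-paged pool per GB of SSO log space, hard NPP limit of 128 GB.
# Illustrative arithmetic only - not an official sizing formula.

NPP_LIMIT_GB = 128      # Windows NPP limit mentioned above
NPP_PER_LOG_GB_MB = 5   # 5 MB of NPP per GB of log space

def npp_needed_gb(total_log_space_gb):
    """NPP required (in GB) for a given total amount of SSO log space."""
    return total_log_space_gb * NPP_PER_LOG_GB_MB / 1024  # MB -> GB

def max_log_space_tb():
    """Maximum log space (in TB) the 128 GB NPP limit could cover."""
    return NPP_LIMIT_GB * 1024 / NPP_PER_LOG_GB_MB / 1024

print(round(npp_needed_gb(2000), 2))  # ~9.77 GB of NPP for 2 TB of log space
print(round(max_log_space_tb(), 1))   # ~25.6 TB of log space at the NPP cap
```

In practice the NPP limit is rarely the first constraint you hit, but it is worth knowing it exists before enabling SSO on many large vDisks at once.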

SSO needs no adaptation of the backend physical storage system. The only option to increase performance is to enable optimization for sequential writes on your storage system (if this is supported by your storage).

The SAU (storage allocation unit) size doesn't matter, so there is no need to change it.

In combination with async replication, keep in mind that writes need to be replicated. If you speed up writes that dramatically, you may run into problems transferring your changed blocks fast enough.

Currently you can enable SSO only while the vDisk is unserved from all hosts. Disabling SSO can also be done while the vDisk is served.

If you disable SSO, all data in the log partition will be destaged to the data rest area. This can take some time and generate a significant amount of I/O.

SSO and CDP can't be activated simultaneously on the same vDisk.

If you want to use SSO on a newly created disk, first copy all needed data to the vDisk and enable SSO after that. If you do it the other way around, all your data will be kept in the log, causing a huge blow-up of allocated storage. Additionally, no data at all will be located in the data rest area, so it all has to be copied there if you disable SSO.

If your DCS crashes, the index held in NPP memory is lost and will be rebuilt after the restart, so SSO-enhanced storage is to be considered as safe as non-SSO-enhanced storage.

SSO will not speed up things if your backend storage is already at a high saturation level. As SSO needs additional resources, it could even make things worse. On the other hand, after enabling SSO the load could be reduced because of the now mainly sequential patterns. Still, the recommendation is to test SSO on only a few vDisks if you suspect your storage is already at its limit.

The last words: SSO is a cool feature, it is free of charge, and it has only very few side effects (okay, 6x the storage could be a problem for some customers, but the performance increase should be worth the investment in a few TB of low-cost storage). It can only speed up things for a specific I/O pattern, but it won't make things worse even if this pattern doesn't apply to your vDisk.