How our company progressed from network-attached storage to direct-attached storage to increase performance and improve disaster recovery.


Almost two years ago I took a position with a company that was looking for an IT guy who could take over the existing network infrastructure from a local service provider and continue to deliver the same or better service in house for their growing needs. The service provider had already set them up with a three-server HP VMware stack connected to a NetApp storage server running NFS with high availability, plus an automated onsite and offsite backup solution. For the most part they were already set by the time I got there; all I had to do was take the helm and stay the course until the tide shifted.

Sure enough, about eight months into my position the tides started to shift. Don't get me wrong, I had plenty to do. We were in the initial stages of setting up the company with a new Microsoft Dynamics AX ERP system to replace their aging product. This was no small task, and as the project and the software developed, so did the resources needed to make the application perform to our standards. The NetApp was soon showing signs of an Ethernet bottleneck, even though multiple 1 GbE NICs were teamed together to increase bandwidth. I looked at the cost of 10 GbE NICs for the NetApp and almost lost my lunch. Instead, I opted for a rack-mounted QNAP NAS box with off-the-shelf Intel 10 GbE Ethernet cards and filled it with solid-state drives to match the capacity of the NetApp. I was able to purchase the QNAP box and the Intel 10 GbE cards for roughly what the 10 GbE cards alone would have cost for the NetApp. I did have to spend extra on the solid-state drives, but by the time I was done I had an NFS powerhouse that could chew through virtually anything I threw at it, and I had the NetApp to use as a secondary storage server for local backups. This was money well spent for what our needs dictated at the time.
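For anyone repeating this move, pointing the ESXi hosts at the new NAS's NFS export is a one-liner per host. The hostname, export path, and datastore name below are hypothetical placeholders, not our actual configuration:

```shell
# Mount an NFS export from the NAS as an ESXi datastore (run on each ESXi host).
# "qnap01", "/share/vmstore", and "qnap-nfs" are placeholder names.
esxcli storage nfs add --host=qnap01 --share=/share/vmstore --volume-name=qnap-nfs

# Confirm the datastore shows up and is mounted
esxcli storage nfs list
```

These commands only run on an ESXi host, so treat them as a configuration sketch rather than something to paste blindly.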

As time passed and I added services to the network, things again began to stress the overburdened Ethernet pipes in our organization. Additional virtual servers were added to deliver features our users needed but never knew were possible, and these new services eventually became as essential as our ERP system. They began to stress the limits of the dedicated 10 GbE backplane the QNAP was attached to, and they pushed us to the point where high availability was no longer an option: there were not enough resources left on our original three-server stack to fail over to in the event of a disaster. What was a network administrator to do?
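To put rough numbers on why shared Ethernet storage kept bottlenecking: even a perfectly balanced team of four 1 GbE links tops out around 500 MB/s of raw wire speed, which a handful of SSDs can saturate. A back-of-the-envelope sketch, using illustrative figures rather than measurements from our setup:

```shell
# Aggregate wire-speed ceiling of a NIC team, in MB/s
# (decimal units, ignoring protocol overhead)
link_mbit=1000   # one 1 GbE link
links=4          # four links teamed
team_mbps=$(( link_mbit * links / 8 ))
echo "4x1GbE team ceiling: ${team_mbps} MB/s"

# A single 10 GbE link, by comparison
ten_gbe_mbps=$(( 10000 / 8 ))
echo "10 GbE ceiling: ${ten_gbe_mbps} MB/s"
```

Real throughput lands below these ceilings once NFS and TCP overhead are paid, which is why the bottleneck showed up well before the math says it should.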

I immediately went to work building the case for a second virtual stack with local attached storage running RAID 10 on solid-state drives. This would give us the disk I/O needed to increase performance, with the added benefit of splitting the I/O across all servers in the second stack instead of going back to a single NFS NAS and a single point of failure. Each new server was outfitted with a dual-port Intel 10 GbE adapter to connect to the QNAP NAS box, as well as eight 1 GbE network adapters to connect to the base network. The virtual servers on the original stack were migrated to the new servers using Veeam's replication and failover capability, and the original servers were then repurposed as failover machines for use in a disaster, again via Veeam replication. The NetApp is being repurposed as a target for Veeam's long-term backups of the virtual machines, while the QNAP stores the more recent replication points as the DR storage facility.
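Sizing the local RAID 10 arrays is simple arithmetic: mirroring halves raw capacity, and every logical write costs two physical writes. A quick sketch of the math, using a hypothetical eight-drive build rather than our actual part list:

```shell
# RAID 10 sizing math (hypothetical 8 x 1 TB SSD array)
drives=8
drive_tb=1
usable_tb=$(( drives * drive_tb / 2 ))   # mirrored pairs halve raw capacity
echo "usable capacity: ${usable_tb} TB"

# RAID 10 write penalty is 2: each write lands on both mirrors
host_write_iops=40000                    # illustrative host-side write load
backend_iops=$(( host_write_iops * 2 ))
echo "backend write load: ${backend_iops} IOPS"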

Now granted, using DAS took VMware High Availability out of the picture, but our shop doesn't run 24x7 and can withstand a few minutes of downtime while we manually fail over to our disaster recovery stack with Veeam. I also understand that Veeam can automate the failover process, but it was a personal choice to fail over manually so an unintentional failover wouldn't occur if the software hit a false positive. This also freed up resources on the original servers, since the capacity we had been keeping in reserve for high availability was no longer necessary. As a side note, we were also running Zerto to an offsite stack, so if the building blows up tomorrow we can always fire up the offsite terminal server and run through the cloud. My hope is that this setup will last us a while, but as you know, nothing lasts forever, so I'm already thinking ahead to running VMware vSAN when this setup is due for its next refresh. There you have it: most folks run from DAS to NAS or SAN, but this is the story of my regression back to DAS, and right now I couldn't be happier.