Hey everyone, I know it’s been a while since I last posted here. That is mostly due to a version upgrade to Daddy v2.0. My lovely wife Cara and I welcomed our second son Rocco Xavier Scuola into the world on December 13th and it’s been a bit of an adjustment for our entire house. My blogging has been directly affected as well but I’m hoping to change that going forward.

One thing that I was able to get done was my vExpert application. You can see the VMTN Blog post on the application process here:

I owe A TON of credit to the guys at the NYC VMUG. The leaders of the group are some of the most inspiring, positive, knowledgeable and helpful people that I have met. I’m lucky enough to not only be part of the same group as them but I can call a bunch of them my friends. There are currently 11 vExperts from the NYC VMUG. Each one of them encourages all of the VMUG members to join their ranks. That’s what I’m here to do as well.

How did I become a vExpert?

Well, as my buddy Ariel Sanchez (@arielsanchezmor) said, “Overall, I think he just didn’t know he was a vExpert, but he has been doing the role for years now!”. This filled me with such a sense of pride and accomplishment. What did it mean though? The more I’ve thought about it since applying, the more I’ve realized that it’s about being part of the community and embracing others to join you and help evangelize virtualization and VMware. There are so many ways you can go about that. Some people are amazing bloggers (I’m not there yet but I hope to be one day), others are VMUG leaders, others have YouTube channels, some give vBrownBag presentations and there are a number of other ways to get there.

Personally, I try to be active in official VMUG meetings and unofficial meetings such as Whiteboard Sessions, vBeers, other user group meetings, etc. as well. I’ve given a presentation on Hyper-Converged Infrastructure Fundamentals (HCI Presentation) and I hope to do more presentations this year. I also try to post any and all interesting VMware-related articles that I come across to my Twitter (@NScuola) and LinkedIn (NicholasScuola) accounts. I blog (spookysolutions.com) from time to time as well, and I plan to do that with much more frequency. I’m in the process of building out my home lab and testing a large number of things from vSphere 6.5 to VSAN to NSX to vRA. Stay tuned. My favorite way of participating in the community is just talking shop with other members of the community. This doesn’t just have to be during a VMUG meeting. It can be over beers during a conference, at the airport waiting for a flight, in a Slack channel or an online forum like Reddit. There are plenty of ways to talk to people. Most of us have similar backgrounds in the industry and have probably gone through some of the same upgrade/install/troubleshooting steps. It’s always fun for me to hear about the experiences of others and find ways that I can improve my process or find different ways of accomplishing the same tasks going forward. Every once in a while I’ll even help someone out with some of my own experiences.

What does being named a vExpert mean to me?

Being named a vExpert showed me that my contributions in the virtualization community actually mean something. It also provides access to some of the sharpest minds in the VMware space. I have been able to not only meet some people whose blogs I’ve followed for years but I’ve been able to interact with them directly and have meaningful conversations around the technology. It also shows me that anyone could do this. I’ve named a few of the different avenues you can take to becoming a vExpert, but the most important thing is that you have to dive in and do it.

Sometimes the hardest part is believing in yourself enough to give it a shot. I’m living proof that the effort pays off. If you have any questions, or would like to pursue the vExpert award yourself, feel free to contact me and I’ll do my best to put you on the right path so you can join our ranks. One of the biggest joys of being part of the club is helping others join us.

I was given the honor of presenting at the October 19th NYC VMUG meeting (that’s VMware User Group for my non-virtualization friends). The topic that I was asked to speak about was Hyper-Converged Infrastructure (HCI) Concepts. I am by no means an expert on the topic but given my numerous years dealing with infrastructure, I know enough to get by. I wanted to share my presentation with you all. The presentation is at a high level, just going over the key points for those new to the space. I tried to keep it at a level where my non-technical wife could understand what I’m talking about.

Side Note: I run all this stuff by her. If I can keep her attention, I should be able to keep yours. 🙂

Here are the key points that I touched upon:

– A little history lesson on how VMware changed the supply chain when it comes to deploying servers and applications to customers (or users).

– An overview on the journey from Converged to Hyper-Converged.

– A run down of the different infrastructure approaches (Traditional, Reference Architectures, Converged, Hyper-Converged).

– Different players in the market today.

– Use cases.

– Where HCI fits in and some of the drawbacks.

I must say, I was humbled by the response that I received from everyone at the meeting. The feedback was overwhelmingly positive and I helped clear up a few points of confusion for some of the attendees. They were actually able to get something out of this and hopefully you will too.

A little over three years ago my wife introduced me to the wonderful world of 5 year plans. At first I was skeptical. I may have even thought it was a little lame (don’t tell her that though). Even still, I humored her and created one. We actually did 1-, 3- and 5-year plans based on our personal, professional and family goals. I’m not going to go into all of the details of my plan but there is one section that I am going to touch on. Under my professional goals, I had an education section. This section was mostly based around IT Certifications. I had a 1 year goal of re-certifying my VCP5-DCV (Achieved!), a 3 year goal of achieving a VCP5-DT (Achieved!), and a 5 year goal of achieving a VCAP5-DCA. As you can see I completed 2 out of 3. Technically, I still have 2 years left to achieve my goal but the universe had different plans for me.

I’ve actually paid for the majority of my training and certs out of pocket, so the cost of exams is a factor in when I can actually take them. The VCAP is not cheap. Last time I checked it was around $400. This is an advanced level exam so this isn’t surprising, but it’s still a lot of money, especially for those of us with families. VMware does provide beta exams though and they come at a fraction of the cost. In this case the VCAP6-DCV Deploy exam was only $100. The only problem was that I didn’t have a lot of time to study. I figured I’d give it a shot though. Even if I failed, it would still be money well spent: I could experience the exam firsthand and see where I stood. There was another possible outcome though: I could pass, WHICH I DID! I achieved my goal well ahead of time and saved some money to buy my little guy more Thomas The Tank Engine trains. 🙂

This post is going to document my experience and any tips that I may have to help others achieve their goals of becoming VMware Certified Advanced Professionals. Here’s how I did it.

Exam Blueprint

You can find all the information that you’ll need regarding the topics covered, how to register, exam fees, recommended training and other helpful hints directly from VMware on their exam blueprint page.

Your Peers

This one is perhaps the most important item that I’m going to talk about, not just for this or other exams, but for any issue you run into at work or in life. One of the greatest venues to talk to your peers is the VMUG. I’m lucky enough to be part of one of the best chapters around in NYC. These guys and gals love what they do, are extremely talented and have diverse backgrounds from every industry that you can think of. If you have a goal in mind, chances are there is someone else in your group that shares it or has already achieved it. My VMUG leaders are always willing to help out or give guidance where they can. I highly recommend joining or starting a study group nearby or online. There are plenty of LinkedIn & Google+ groups filled with individuals just like you that want to pass this and other exams. I’m always here to help where I can as well. You can find me on Twitter at @NScuola.

VMware Hands On Lab

Those of you that have never heard of VMware’s Hands On Labs are really missing out. Not only are they really in depth but the material is coming straight from the horse’s mouth. The interface is nearly identical to what you’ll use on the actual exam as well. The material is extremely helpful not just for the exam but it may help you at the office as well. It’s also much cheaper than standing up a home lab. Here are some of the specific labs that I went through.

PLURALSIGHT

One of the tools that I use in my certification endeavors is a paid PluralSight account. PluralSight is a great resource for video training on a variety of subjects. The courses that I went through included but weren’t limited to the following:

These courses go into great detail and include real-world examples of how to install, configure and troubleshoot the different components involved with vSphere.

BLOGS

Just do a search for VCAP exam experience and you’ll find endless accounts from people that were successful and others that weren’t. Each one should provide you with helpful information for your own attempt.

In closing, this exam is tough. There are no shortcuts. You’re going to need to do the work. There is a time crunch that will get you if you let it. I’d recommend taking notes on each item, knocking out the questions that you know and returning to the ones that you don’t later. I actually missed 2 questions entirely because I ran out of time. The interface is very similar to the hands on labs that VMware provides and you can actually see exactly what it looks like here. There is access to documentation as well, but searching it will chew up a lot of time, so try leaving the questions that you’re stuck on for the end.

Keep in mind this is a 3 hour exam. Make sure that you’re hydrated and have used the restroom prior to going into the exam room. When you’re sitting down for this long, you’re going to want to be comfortable. It’s pretty tough to concentrate if you’re not.

At the end of the day though, if you study to the best of your abilities and can successfully complete all of the objectives on the blueprint, there is no reason you can’t pass this exam too. Good luck!

Stop me if you’ve heard this one before. It’s Monday morning, you open up your newsreader and there are 25 different articles about an exploit that has been found that is sweeping the net. It affects nearly 90% of systems out there. You know it’s only a matter of time until this news goes from being on the tech sites only to the Wall Street Journal and The New York Times. Once that happens, the alert level hits red. Now all of your C-Level execs are aware of the problem and someone is going to be calling you asking for a status update. If you’re Peter Gibbons, you may even have 8 different people calling you. Where do you go from here? In the old days, this would mean, any plans you had for that weekend were scrapped. You’d now have to coordinate outages with your application teams, IT staff, sometimes you’d even have to get your building’s security team involved. You’d also have to break the news to your wife, husband, girlfriend, boyfriend, kids, or whoever that you may not see them again until Tuesday (assuming that all goes well). Then you get to go through this scenario:

Planning and executing the downing of all of your affected DEV/TEST systems.

Preemptively opening cases with your vendors in case you run into an issue (you would hate to get stuck in the queue without a case number while your systems are down).

Downloading and applying the patches to fix the vulnerability.

Bringing all of said systems back up and running.

Contacting all of your applications owners once the systems are back up and having them test all of the applications.

Squeeze in a phone call to your loved ones asking about how life is on the outside.

Notifying all of your users that the systems are back up and running and that now regular weekend work can commence.

Once all of this is done, and you’ve verified that everything is OK and there are no issues, you can now plan to do the same thing to your Production systems. YAY! That usually means another weekend down the toilet.

Many times, some of the pain involved with this type of maintenance can be lessened through mechanisms like vMotion, Exchange DAGs, and clustered systems in general. Typically, you patch each of the secondary nodes in the cluster, then you patch the primary node and you’re good to go. This process of upgrading different cluster nodes can take hours depending on the size of your environment and requires total concentration and focus. If you run into an issue during a failover, you’ll be happy you opened that support case.

Why do I bring all of this up? Traditionally, the one system that usually has the biggest issues during this kind of upgrade/update scenario is your storage environment. Especially if you are on legacy storage for one reason or another. In most cases that I have seen, storage code upgrades are completely ignored unless absolutely necessary. I can see why people make that argument. If your storage goes down, especially in a small to medium sized shop, EVERYTHING goes down. This scares the pants off of a lot of people, with good reason. They would rather take the “If it ain’t broke, don’t fix it.” approach of yesteryear. Nobody wants to run into those kinds of problems and lose their weekends because of storage issues. This kind of thinking leads to rolling the dice and hoping that the storage environment will just keep on chugging along and that no one will exploit the vulnerabilities that are out there. I think this model is changing in storage though, along the same lines that the break/fix mentality was replaced with a proactive approach. IT departments are getting more sophisticated and are looking to get everything patched and protected BEFORE someone tries to exploit the vulnerabilities.

What if you, the IT engineer, could avoid those sleeping-in-the-office kind of issues and get your weekends back? Who would say no to that? As I’ve written about in the past, I’ve been a customer of Pure Storage for about two and a half years now. I started out on an FA-320 array, I’m currently using their FA-400 series and I’m getting ready to start playing with the FlashArray//m as soon as it arrives. One of the things that sold me on Pure Storage was the Non-Disruptive Upgrade (NDU) capabilities for both the software and the hardware of the array (you can see a demo of their NDU here). I’ve gone through almost every iteration imaginable. I’ve done code upgrades (both minor and major revisions), I’ve added additional shelves of disk, I’ve gone from 300 series to 400 series controllers; you name it and I’ve probably done it. The one similarity in every upgrade was that it happened like they said it would happen. No downtime, no performance degradation, no idea that it was happening from a user perspective. They were all quick, seamless, and pain free. They also happened during the week (we played it safe and did them on Friday evenings for our Production units), but on Saturday morning I was home playing with my little boy, which is what I care about most.

As I said earlier, this approach appears to be the new status quo. Many other vendors besides Pure Storage are trying to follow suit. EMC has stated that they now support NDUs (although I’m not sure that is the case for different hardware versions). Other vendors such as SolidFire and Nimble also support NDUs. This is a direction that I think everyone in IT welcomes. Being able to provide services quickly to the end user without disturbing their workflow is the goal of nearly every IT staff. This new model greatly increases the success rate of achieving that goal. Pure Storage has gone one step further and changed the typical storage lifecycle model around this principle when they launched Evergreen Storage. The belief is that forklift upgrades will go the way of the dodo bird and you can just replace individual components when needed. Your maintenance never increases (unless you add capacity). Your storage system can stay the same for as long as you need it to, saving you tons of money in the long run while also providing you with a solid foundation to house your infrastructure on.

If other systems start following suit and rethink how we look at system lifecycles, the end result can be great for IT Admins. What if it was as easy to upgrade the code on your core switches and routers as it is to upgrade an app on your iPhone? What if said code could be upgraded FROM your iPhone while you’re sipping margaritas on a beach somewhere (just don’t drink too many until the upgrade is done)? What if upgrading your email servers wasn’t a 6 month project? Whether it’s PC refreshes, server upgrades, or application upgrades, a pain-free process is something everyone would welcome and what we currently strive for as IT pros. We can not only make end users’ lives easier; I think it’s time that we make our own lives easier as well. Don’t we as IT admins deserve the same level of happiness and time away from the office as our users do? I sure think so. I think you all would agree with me. It’s nice to see that vendors like Pure Storage share that same vision and are doing something to achieve it.

I bet a lot of you are reading this and saying to yourself, “I’m already an awesome IT Professional!” You know what? You’re right! The fact that you would read something to try and better yourself even though you’re already awesome is one of the many things that makes you awesome. For some of the newbies to the field or the ones that may find themselves in a rut, or even those that just want to get to the next level of their careers, this guide is for you.

What makes an awesome IT Professional? How can you spot one? How can you become one? How can you tell if you’re going in the other direction? Here are some of my tips for being an awesome IT Professional.

BE RELENTLESS

I’m not saying you need to be a nag or that annoying telemarketer from Sirius XM who keeps calling you to renew the free subscription you got when you bought your car. Being relentless is about looking at your career as a lifelong quest to improve yourself. Keep on learning. Read blogs. Build home labs. Take and pass certification exams. Join user groups. Do whatever you can to learn everything about your field. I look at it this way: whenever someone on my team has a question or can’t figure out how to do something, I want them coming to me first for an answer. Better yet, I want to have an answer to give them. No one has all the answers but that doesn’t mean you shouldn’t try to learn as many of them as you possibly can. Try to be the best you can be both professionally and personally. Never stop pursuing your goals. It’s never too late to get there.

BE ORGANIZED

I can’t emphasize this one enough. You don’t need to be super anal but you need to be close to it. Document everything that you can. If you’re rolling out a new application, have all of the IPs and spreadsheets done beforehand so you’re just reading off a prepared list. When an issue comes up you don’t want to be fumbling for information. You want it to be easy to find and intuitive. Uniformity in your environment goes a long way in this regard. If you have multiple sites and your gateway has a similar IP at each location, it takes some of the guesswork out of troubleshooting. It also makes things easier for new employees to get up to speed if your environment is set up logically. Use tags, use Organizational Units, put descriptions on your router interfaces. It may be more work up front but in the long run, it will help you exponentially. This goes for your day to day work too. Organize your email into folders, use contacts, do all of the things that you know you SHOULD do but probably aren’t. Keep notes when issues arise so you have something to refer back to if they pop up again. The best IT Professionals do the majority if not all of these things.
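To make the uniformity idea concrete, here’s a minimal sketch of deriving every site’s addressing from one rule. The scheme itself (10.&lt;site&gt;.0.0/16 with the gateway on .1) is purely a hypothetical example, not something from my environment:

```python
def site_plan(site_id):
    """Generate a site's key addresses from a single convention so
    nothing needs to be memorized or looked up during an outage.
    The 10.<site>.0.0/16 scheme here is illustrative only."""
    return {
        "subnet":  f"10.{site_id}.0.0/16",   # one /16 per site
        "gateway": f"10.{site_id}.0.1",      # gateway is always .0.1
        "mgmt":    f"10.{site_id}.0.10",     # mgmt host is always .0.10
    }

# Troubleshooting site 3? You already know its gateway without a lookup.
print(site_plan(3)["gateway"])
```

The payoff is exactly the one above: at 2 AM you don’t dig through spreadsheets, because every site looks the same.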

BE LIKE WATER

“Be like water making its way through cracks. Do not be assertive, but adjust to the object, and you shall find a way around or through it. If nothing within you stays rigid, outward things will disclose themselves.

Empty your mind, be formless. Shapeless, like water. If you put water into a cup, it becomes the cup. You put water into a bottle and it becomes the bottle. You put it in a teapot, it becomes the teapot. Now, water can flow or it can crash. Be water, my friend.”

― Bruce Lee

Those of you who get the reference probably know where I’m going with this. This has been a common theme in my life. You should be able to adapt to any situation that arises quickly and easily. A similar motto is used in the Marine Corps: improvise, adapt, and overcome. Most IT Professionals will deal with an end user or customer in one way, shape or form. Every user is different. Every user has a different personality and you’ll have to adjust to the personality that you are faced with. Some are pleasant to deal with, some can be complete and total nightmares. You’re not going to know at first until you are faced with the situation. The same goes for selling to a customer, you’re not going to know how to approach a customer until you hear what their situation is currently and where they are trying to get to. Once you have a clear understanding of what they are dealing with, you’ll better know how to assist. Be water, my friend.

BE A DETECTIVE

A lot of IT troubleshooting comes down to identifying a problem and resolving it as quickly as you can. A particular issue may come up but have more than one reason for why it is occurring. Take for example a user not having internet access. This could be DNS related, it could be a bad cable, it could be a bad network device, or it could be a problem with the ISP. You’ll need to find a way to go through all of the possible causes and come up with a resolution while keeping an open mind that every situation is different and may require different troubleshooting methodologies. You may not know the answer at first and you may need to do a little detective work to find the solution. How do you know where to look? This comes with experience; every IT Professional has their own way of doing things, but one thing mostly everyone shares is the use of Google. As an IT Professional you will spend countless hours searching through KB articles, blogs and obscure tech websites searching for the one page that someone created after going through the same mess you’re going through. Once you have a good way of troubleshooting, any problem can be resolved. Knowing the answer is sometimes less important than being able to find the answer. No one has all the answers.
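That process of elimination can even be scripted. Here’s a rough Python sketch that separates “can’t reach anything by IP” (cable, network device, or ISP) from “DNS is broken”; the probe target (8.8.8.8 on port 53) and hostname are illustrative assumptions, not a prescribed method:

```python
import socket

def diagnose(host="example.com", probe_ip="8.8.8.8", port=53, timeout=3):
    """Rule out layers in order, as described above. The probe IP,
    port and hostname are just examples."""
    # 1. Can we reach a well-known IP at all? If not, suspect the
    #    cable, the local network device, or the ISP.
    try:
        socket.create_connection((probe_ip, port), timeout=timeout).close()
    except OSError:
        return "no raw connectivity: check cable/switch/ISP"
    # 2. Raw IP works -- does name resolution? If not, it's DNS.
    try:
        socket.gethostbyname(host)
    except OSError:
        return "connectivity OK but DNS fails: check resolvers"
    # 3. Both work -- the problem is higher up (proxy, browser, app).
    return "connectivity and DNS OK: look higher up the stack"

print(diagnose())
```

The script is the same detective logic in code form: each test eliminates a whole category of causes before you spend time on the next one.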

BE APPROACHABLE

This one is often overlooked. How many of you know of a super smart IT Professional who always knows what the issue is but is a complete pain in the ass to deal with? Almost as if you’re being a horrible person for asking them to do their job. I’m not going to lie. This was me for a few years there. I was a nightmare. I just wanted to sit in the air conditioned datacenter and pound away at all the project work I could find, but I didn’t want to interact with anyone. It may have been all of the years of support that chipped away at me, I don’t know. At the end of the day though, it was counter-productive. You need to build relationships to be successful. Whether it’s with your end users, management, your vendors, or your peers. At some point, you’re going to need help in procuring equipment, a reference, or even a new job. This also goes for managers. You want your employees to be able to come to you with issues, ideas or general concerns while feeling that they can have a constructive conversation. Building relationships with those around you will help you find answers in situations where looking for them yourself is not working. It will also grow your network for years and years to come.

BE COLLABORATIVE

Unless you are in the smallest of companies, you work with a team. You all have similar (if not the same) goals, so why not pool your resources? If you are strong in a particular area and you know that one of your comrades is not, bring them up to speed. Sit down with them and try to explain as much as you can. Cross train with each other. If you know virtualization and the person next to you knows networking, you can both be two-trick ponies. It will also help give you different perspectives on technologies that you may not have had before. In the end you’re only as strong as your weakest link. If your team is strong, you all look good. If one person isn’t, the same holds true. You ALL look bad. I’ve found that some of the best relationships I have to this day are with people that I worked side by side with for months and months, collaborating, sharing ideas, and discussing the task at hand. Some people out there take the opposite approach and like to hoard knowledge and info like a pack rat. What happens when they are not there? If you’re the only one who knows something and you go on vacation, guess who’s getting a phone call on the 4th tee. Yep. No one likes getting work calls on vacation. If everyone on your team is in the loop though, you won’t be getting a phone call because the issue will be resolved before it gets to that. Everybody wins.

BE RESTED

As I mentioned earlier, I hit a point a few years back where I wasn’t the greatest person around. I can look back on that time now and realize that I was burned out. I was working crazy hours, I was working on multiple projects and I was re-certifying a bunch of my certifications all at the same time. That translates to not a lot of sleep and a level of irritability seen only in garbage cans on Sesame Street. One of the things that really helped me was when I realized that there are only so many hours in a day. You’re not going to finish everything you set out to do every single day. You need to manage your time wisely and learn how to prioritize your tasks. This will go a long way to ensuring that you are productive while not hitting the wall because you tried to do too much all at once.

These are just a few of the guidelines that I’ve used in my career. If you already do some or all of these things you’re probably already awesome. No matter how great you already are you can always improve. Not just in IT but in life as well. The journey is a marathon, not a sprint. BE AWESOME TODAY!!

I’m sure you all have even more great ideas that I’m forgetting or I just didn’t think of BE COLLABORATIVE and share them with the group. 🙂 As always, comments are welcomed and encouraged. Thanks for reading.

After a ton of positive feedback on my last post (thank you all for that), people wanted to know more. Specifically, how did I come to the decision on what product was right for my environment? Hopefully, this post will help guide you in the right direction and maybe point something out that you didn’t think of previously. I’m going to do my best to generalize this so you can compare and contrast vendors on your own. Every environment is different so you’ll have to cater these guidelines to your situation. No one is going to know what you need better than YOU! This actually leads me into my first point…

Identify Your Needs

This step is the most important in my opinion but it is often the most overlooked. Why are you looking at new storage in the first place? Are you experiencing a performance problem that you (and/or your current vendor) cannot resolve? Are there limitations with your current setup that are preventing you from providing the necessary services that your customers require? Is there a new project or initiative at your firm that is presenting you with a new set of requirements altogether? An example of this is when your clients request storage replication for DR/BCP purposes where there was no need prior. Or is it a situation where your array was installed while Saved By The Bell was still on the air and it’s just time for you to find out what the latest and greatest product is and how fast you can get it installed in your environment? Also, do you need Fibre Channel, iSCSI, direct attached or something else entirely? Once you have a clear and concise understanding of what you are looking for and why, the rest of the search is much simpler.

Cost

Unless your name is Tony Stark, Bruce Wayne or Richie Rich, cost is a major factor in any IT purchase. You’re going to have a budget that you need to stick to and you also need to get the most bang for the buck. This is a step that can get very tricky if you don’t have a clear picture of your environment. Obviously, the cost of the array itself and the associated support & maintenance are huge factors in what your overall spend will be. There are other things to consider as well.

What does your environment look like now?

Are you in a Co-Lo facility?

What is your current monthly OPEX spend from a power, cooling and rackspace perspective?

What are your power requirements? Does your current array require dedicated circuits to run? What is the additional cost of those circuits?

How many rack units and/or full racks does your current setup use? How many do you have available?

What is the total $/GB (or TB)?

Are there additional cost considerations? Will your existing SAN support the additional port density? Will you need to purchase additional networking equipment, or cables to support the new requirements?

Are there software costs to consider? Do you have to license individual features such as replication, snapshots, etc? Or is it included with the cost of the array?

What are the costs for support and maintenance? Do these costs increase substantially over time or will they remain flat for the lifespan of the array? Does maintenance entitle you to any new features or hardware?

Better yet, will the solutions that you are looking into DECREASE any of the above mentioned costs? Will you save money on monthly OPEX costs thus lowering the TCO for your solution?

These are some of the things that you need to consider when calculating what your total spend will be. I’ve never met a CxO that likes to be surprised by large increases in their monthly or yearly budget that they didn’t plan for. It usually means a nice conversation with the CFO which never ends well for the CxO and ultimately it doesn’t end well for the person responsible for the increase.
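To pull those questions together, here’s a back-of-the-envelope sketch of a total spend calculation. Every figure below is hypothetical; the point is the shape of the math, not the numbers:

```python
def total_cost_of_ownership(array_cost, usable_gb, monthly_opex,
                            support_per_year, years=5):
    """Roll capex, support/maintenance and monthly OPEX (power,
    cooling, rackspace) into one lifetime number, then normalize
    to $/GB so different quotes can be compared apples to apples."""
    tco = (array_cost
           + support_per_year * years
           + monthly_opex * 12 * years)
    return tco, tco / usable_gb

# Hypothetical quote: $250k array, ~100 TB usable, $1,500/month
# in power/cooling/rackspace, $30k/year support, over 5 years.
tco, per_gb = total_cost_of_ownership(
    array_cost=250_000, usable_gb=100_000,
    monthly_opex=1_500, support_per_year=30_000)
print(f"5-year TCO: ${tco:,.0f} (${per_gb:.2f}/GB)")
```

Running the same function against each vendor’s quote (and against your current array’s OPEX) is a quick way to see whether a new solution actually DECREASES your monthly costs, as discussed above.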

Performance

Now that you know what your needs are and how much you can spend on your shiny new array, it’s time to get down to business. It has to live up to the hype. You’re going to step in front of your boss in a conference room with a fancy PowerPoint presentation that took you 6 weeks to prepare since you’re a technical person not a PowerPoint guru. You need to justify this exorbitant expense that you are throwing in front of them. The array HAS TO perform. If you are looking at a new array to resolve a performance issue it DEFINITELY needs to perform. You’re going to be looking at All-Flash Architectures, Hybrid arrays, solutions that leverage tiering, server-side solutions, you name it, and I’m sure it will pop up during your search in one way, shape, or form. Once again, the only one who can tell you what is right for you, is you. Make sure you perform baselines before you start looking at solutions so you know what your IOPS, Latency and Bandwidth requirements are. It will help narrow down the possible solutions that suit your needs.
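One sanity check worth doing while you baseline: Little’s Law ties those numbers together (sustained IOPS equals outstanding IOs divided by average latency), so you can estimate any one from the other two. A tiny sketch, with illustrative numbers:

```python
def sustained_iops(outstanding_io, latency_s):
    """Little's Law applied to storage: throughput = concurrency / latency.
    outstanding_io is the average queue depth seen by the array;
    latency_s is the average service time per IO in seconds."""
    return outstanding_io / latency_s

# Example: 32 outstanding IOs at 1 ms average latency.
print(sustained_iops(32, 0.001))
```

If a vendor quotes huge IOPS at a latency your workload never sees, this relationship helps you translate their number back into terms that match your baseline.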

Capacity

Along with performance, question 1A is usually "How much space do I need?". Seems like a pretty obvious question as well. Along with how much space you need, you should be asking yourself, why so much space? Are you just looking for a performance enhancement when the capacity that you have is more than sufficient? You have 100TB now, so you'll get 100TB on your new array? Are you taking growth into consideration? Is what you're buying now sufficient to hold you over for the next 3-5 years and beyond? How difficult is it to add new capacity to the array you want to purchase a year from now, 3 years from now or 5 years from now? Can capacity be added non-disruptively? (HUGE POINT in my opinion.) What type of storage are you looking at? Are you looking at tiers, all-flash, SAS, SATA? How much of a concern is speed? What type of data will be stored on the array (VMs, databases, email, archive, file)? This is an area you need to be relatively sure of prior to purchase, or make management aware that additional capacity may be needed in the future. You don't want to walk into your CxO's office 18 months after you buy an array asking for more money because you didn't buy enough disk. Depending on your CxO, that can turn into a resume-generating event.
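The sizing arithmetic itself is simple enough to sanity-check in a few lines. Here's a sketch using round numbers; the 20% headroom factor is my own rule of thumb, not a standard:

```python
# Hypothetical sizing: 100 TB usable today, 150% total growth expected over 3 years.
current_tb = 100
growth_pct = 150          # total growth over the planning window, not per year
headroom = 1.2            # 20% headroom so the array isn't bought full (rule of thumb)

needed_tb = current_tb * (1 + growth_pct / 100)   # capacity at end of window
target_tb = needed_tb * headroom                  # what to actually shop for
```

Whatever numbers you land on, write down the assumptions behind them; they're the first thing management will ask about 18 months later.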

Features

Now that you know how fast your disk needs to be and how much of it you need, it's time to look at the other factors that you should consider. For me, the first was simplicity. I've worked with at least a dozen different arrays. The bottom line is that storage is not the easiest area to deal with if you are not a seasoned storage vet, especially when you get into the hundreds of terabytes and petabytes. Smaller shops usually feel this pain a little more than enterprises do. They may have really good Windows & VMware admins, but most of the jack-of-all-trades guys learn storage last. Enterprises usually have dedicated storage teams that do only storage, day in and day out. Having an array that is easy to configure and, more importantly, easy to manage should definitely be on your checklist if you are a novice, or even if you're a top-tier storage admin. You'll need your time to manage the legacy environments that are still lingering. The top of your list should also contain Non-Disruptive Upgrades (NDU). We all know what a pain having to schedule downtime for an array can be. You basically have to bring down EVERYTHING and hope it comes back up normally. Wouldn't it be nice if that went away and you could upgrade your array as easily as you upgrade an app on your iPhone? There are other features that you should look for, like Deduplication, Snapshots, Replication and hypervisor compatibility for virtualized shops. VAAI support makes a huge difference in vCenter environments. You'll also need to figure out how easy it will be to migrate your data. If you're a VMware user, it should be as simple as a Storage vMotion. Physical hosts can be a little trickier, but most vendors will provide guidance and assistance when necessary. A lot of the features that you'll need will be extremely apparent just from dealing with your current situation. You know what you like and what you don't; now is your time to fix all of those issues that you've hated for years.

Next Steps

Meet with vendors, lots of them. See what you like and dislike from all of them. Try to gauge which solution meets your needs. You should have the knowledge at this point of what you need, what is most important to you and how much you can spend. Try to get the most bang for your buck. One thing to remember is that you are the customer and you have to do what is right for YOUR company. Making a salesperson happy is not your job; making your end users and your management happy is. When all is said and done, if you're still not sure, make like you're buying a new car. Take it for a test drive. Most vendors can set up Proof of Concept (POC) boxes for you so you can test the array with your own data. Nothing will show you whether a solution will work better than slapping a copy of your VMs on the box and going to town on them. Run the reports you normally run, try your backup jobs, run all of your applications at as close to a production load as you can. What you put into your testing will show tenfold when the production array shows up. You'll have a familiarity with the array and a reasonable expectation of how it will perform. If you took baselines like I suggested earlier, you'll even have data to compare to. Also, speak to your peers and read up as much as you can. There are plenty of engineers and admins that have gone through this process before you. Don't try to reinvent the wheel. Use all the help that you can find around you. Hopefully you have done your homework and you'll be on the right track to storage happiness.

For those of you who are curious, here's a simple breakdown of what my evaluation looked like. Obviously, I went into much more detail during my search, but this proves that you can figure out your needs with just a few bullet points.

Identify Your Needs – Fast performing, small footprint, low power consumption, cut down on FC ports if possible since we're nearing capacity on our SAN.

Cost – Had to stay within my budget (numbers withheld for confidentiality reasons).

Performance – Must be able to run Tier 1 apps without affecting other apps and servers running on the array.

Capacity – Expected growth was 150% over three years. Looked for double the usable capacity of the current system. Must be able to add additional shelves as the need arises.

Features – Simplicity, NDU, Deduplication, Snapshots, Replication.

Next Steps – Met with 10-12 vendors, performed 3 POCs. Found an array that met the majority of my needs; the remaining needs were on their roadmap. We have Loved Our Storage ever since.

Hopefully this guide will help you in your search. I remember the pain that I went through during this process. I’d love to save you from going through the same. The thing to remember is that this is the tip of the iceberg. You still need to install the array and migrate your data. The quicker you can settle on what works for you, the quicker you can get down to the fun stuff. Feel free to reach out with any questions and please leave feedback if you can. Good luck in your search.

Disclaimer: This is an opinion piece meant to help all of the VMware Admins out there, based on my own experiences. I've seen a lot of user reviews and figured, what the heck? I should tell my story, and as you'll be able to tell, I'm not a writer 🙂

I’ve been a “VMware Guy” for a little bit now. It’s been about 10 years since I first started playing around with GSX Server (not a typo). I immediately knew that this thing was a game changer. It was a very rare feeling that I did not feel again for a long, long time. More on that later.

I've seen a lot of different environments in my time as an in-house admin and field engineer. I've seen a lot of things done right and just as many things done wrong. The goal in life of most IT guys (and gals) is to get people off their backs. They may not come out and say it, but it's the truth. The majority of their careers are spent listening to users complain about how the system doesn't do what they want it to, and then having to fix it so it does. VMware Admins face a similar challenge, but in most cases they're listening to other IT guys complain that their server isn't fast enough and needs more resources, or that they need 15 new dev boxes in the next hour to test an application, or that they'd rather have a physical server because VMs aren't as good. So the goal of a VMware Admin is to keep things running as smoothly as possible without having to constantly mess with the environment. Simple is good.

Like I said earlier, I’m a VMware Guy. I started as a regular IT guy and morphed into what I am now. I do a lot of virtualization, some storage, some networking, some scripting when I need to and some Windows administration. Basically, I’m a modern day infrastructure guy. At this point I think it’s what is becoming the norm. IT guys need to do it all. Or at the very least, understand how it all works together.

In my last few years, I've been doing more storage-related work. I've done Fibre Channel configuration and zoning, LUN creation and provisioning; you name it, I've probably touched it in some way, shape or form, and to be honest, I'm not a fan. The work itself is fun but it has a limit. The payoff just isn't there for me. Unfortunately, though, it's a necessary evil. I'd rather spend time working with VMware, but it won't mean much without storage behind it. I always wanted my storage to just work but could never find a platform that didn't require constant babysitting. That is, until I found Pure Storage.

After encountering some performance problems on one of my database clusters, we determined that the problem was the storage array. It was time to find a new way of doing things and the search was on. I’m not going to go into detail about the search itself (unless you want me to, leave comments below), I’m going to tell you about what the results did for me and my environment. Pure Storage’s all-flash array seemed way too good to be true. It was so easy to manage that for once I did not have to concentrate on making my storage work, it just did. Not only did my database cluster perform, it excelled! Obviously, performance should be spectacular with an all-flash array but it was all of the other benefits that really struck me:

Ease Of Use

The first thing that struck me about this product was how simple it was. I used to install products from other vendors and it was usually a FULL day affair. When the Pure engineer came onsite, I was expecting more of the same. What blew me away was the fact that I was ready to kick him out before lunch. When does that happen with any vendor install? It took longer to get the array out of the box than it took to configure it. I grew up using Windows, so I'm familiar with Disk Management. Most Windows guys (and gals) are. Bring the disk online, create a partition, format it and you're off to the races. This was just as easy. The interface is clean, simple and very intuitive. You don't have to be a storage admin to use this product. With Pure, once your zoning and SAN work is done, you add a host or host group to match your VMware environment, create a volume, rescan your storage in VMware, set your path selection policy (a one-line script) and you're done.

No Bloat

One of the things that annoys me nowadays is that everything comes with bloatware. Whether it's a toolbar, a Java installer, your new smartphone, or a new PC, there's always crap you don't want bundled in. The same holds true with hardware. How many times have you gone through this? The array is en route, and the engineer sends you a checklist or pre-req list that includes the need for 25 IPs, 3 management VMs, 200 GB of space for the VMs, and a specific version of Java. Who wants to deal with that crap? Pure, on the other hand, had a one-page document: no VM requirements, no Java requirement, and once again it was nice and simple. Give us your IP info, your time server info and, if you want AD authentication, your domain controller info. Everything was scripted out ahead of time, and once again the engineer was gone by lunch, which means I can spend more time VMware-ing. Is that a word? If not, it should be. #VMware-ing

Storage Overcommitment

I don't know about you, but I've seen so many VMware environments where there is Thin Provisioning on the storage array and then there is Thin Provisioning at the vSphere level as well. This equals problems in most cases, ranging from performance degradation due to Thin Provisioning overhead to arrays running out of space when capacity isn't monitored properly. With Pure Storage, this is a moot point. Since data reduction is inline, VMDK files can now be formatted as Thick Eager Zeroed and storage capacity no longer has to be managed in two places. All of those zeroes that would get written out on a traditional array are now just deduped metadata. All of the performance benefits of Thick Eager Zeroed VMDKs can now be realized, along with simplified management of storage.

Provisioning Times

How many times have you received a request at 4:50 PM on a Friday that someone needs a server and they need it by the end of the day? Most VMs nowadays can be spun up within 15-20 minutes, so usually this isn't the worst thing in the world. But when your next train is an hour later and you need to be out the door by 5 PM on the dot, it IS the worst thing in the world. With XCOPY functionality on the Pure Storage array, cloning from a template with customization usually takes between 9 and 12 seconds in my environment. More importantly, it means that I'm making my train and seeing my kid before bedtime.

Data Reduction

It doesn't matter how big your environment is, I can guarantee you have duplicate data. If you're a large virtualized shop, you have tons of dupes. How many times have you cloned a template? How many different copies of Windows system files are stored on your storage? More importantly, how much do those copies cost you? If you have 100 VMs and there is a 10 GB Windows installation on each server, that's basically 1 TB of data right there. I haven't even mentioned page files, duplicate apps and other instances of duped data. Basically, you're paying for wasted capacity. On an array with data reduction like the Pure Storage array, you'd only have one instance of the data and that 1 TB would now become 10 GB. The other benefit of data reduction is being able to cram a lot more data into a smaller space. Hello, Green Initiatives. So I can have a smaller footprint in my datacenter, requiring less equipment, power and cooling to host the same workload? Sounds a lot like the benefits of VMware to me.
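The back-of-the-envelope math above, spelled out (the 10 GB Windows footprint is a round number for illustration, not a measured figure):

```python
# Back-of-the-envelope dedupe savings using hypothetical round numbers.
vm_count = 100
os_footprint_gb = 10                  # assumed per-VM Windows footprint

raw_gb = vm_count * os_footprint_gb   # what a traditional array would store
deduped_gb = os_footprint_gb          # one unique copy kept after inline dedupe
reduction = raw_gb / deduped_gb       # reduction ratio on this slice of data
```

Real-world ratios vary with workload; OS images dedupe this well, while databases and pre-compressed data generally don't.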

Multiple Workloads

I may be dating myself, but when I was a kid, I remember seeing a brand of shampoo that said "No More Tears" on the bottle. Now I see it a different way: "No More Tiers". Does anyone enjoy configuring tiered storage? Seriously? Anyone? It's a lot of work. A LOT OF WORK. At the end of the day, flash is going to smoke it anyway. So why waste the man-hours configuring something that doesn't work as well, in my humble opinion? I haven't seen a tiered system that compares in cost, configuration and performance to Pure Storage. It's not even close. I may be a storage novice, but this seems like a no-brainer. Also, now I can forget about having to configure multiple VMware Storage Profiles. The only tier that I have now is ONE. You can keep your database servers on the same storage as your print servers and domain controllers and the array will not blink. Everything becomes Tier 1. In some cases, it's complete overkill. The simplicity of it all, though, is such a huge benefit that any additional cost (which is debatable, frankly) is totally worth it. How much do storage admins make? How many of them do you need in a tiered environment with 50 TB or more? How much more complicated does your storage and VMware setup become? Is it worth the price?

No Licensing

One of the other huge benefits is that everything is included. You do not have to license individual components. When a new feature comes out, it's yours. Snapshots, Replication, VVOLs, it doesn't matter. When it gets released, you perform an update and BAM! It's on your array. It's as easy as updating an app on your phone. Pure even went ahead and did the same with their hardware. "Love My Storage" is unbelievable. If you pay your maintenance, you get new controllers. No more forklifts, no more extortion at the end of your support contract. It simplifies your budget in ways that I have not seen when it comes to storage. You just get a product that works and will continue to work for years to come.

Let me try and sum it all up for you. Pure Storage has a product that is simple, easy to manage and extremely high-performing. I left a lot out; I could probably keep going on each of these bullets for days, and probably add a few more if I really thought about it. I know the market is changing and a lot of competitors have similar products, but based on my own experience, Pure Storage is the best of the breed. If your array is coming up for renewal or you're having problems with performance or complexity, I'd highly recommend that you give Pure Storage a look.